CN111273837A - Image processing method and device


Info

Publication number
CN111273837A
CN111273837A
Authority
CN
China
Prior art keywords
frame
image
target
amplified
preset
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion)
Pending
Application number
CN201811476121.8A
Other languages
Chinese (zh)
Inventor
潘胜军
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811476121.8A priority Critical patent/CN111273837A/en
Publication of CN111273837A publication Critical patent/CN111273837A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing and provides an image processing method and device. The method comprises the following steps: acquiring a first frame image and a second frame image that both contain a target to be amplified; acquiring first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame; acquiring second position information of the target to be amplified in the second frame image; calculating movement information of the target to be amplified according to the first position information and the second position information; predicting a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame; and amplifying the image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image. Compared with the prior art, the method ensures that a dynamic target can be accurately frame-pulled and amplified, i.e. that the target to be amplified appears in the resulting target enlarged image.

Description

Image processing method and device
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image processing method and device.
Background
Frame-pulling amplification refers to drawing a frame on a multimedia image shown on an image display page to select an object that needs to be amplified, and then zooming in on the framed area.
At present, frame-pulling amplification is limited to static targets in real-time images and cannot be performed on a dynamic target. The reason is that a dynamic target is moving all the time: by the time the zoom action is finished, the dynamic target may already have moved out of the framed range, so the prior art cannot accurately frame-pull and amplify a dynamic target.
Disclosure of Invention
The embodiment of the invention aims to provide an image processing method and an image processing device, so as to solve the problem that the prior art cannot accurately perform frame-pulling amplification on a dynamic target.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes: acquiring a first frame image and a second frame image, where both the first frame image and the second frame image contain a target to be amplified; acquiring first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame; acquiring second position information of the target to be amplified in the second frame image; calculating movement information of the target to be amplified according to the first position information and the second position information; predicting a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame; and amplifying the image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: a frame image acquisition module, configured to acquire a first frame image and a second frame image, where both contain a target to be amplified; a first information acquisition module, configured to acquire first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame; a second information acquisition module, configured to acquire second position information of the target to be amplified in the second frame image; a movement information calculation module, configured to calculate movement information of the target to be amplified according to the first position information and the second position information; a draw frame prediction module, configured to predict a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame; and an image magnification module, configured to amplify the image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image.
Compared with the prior art, the image processing method and device provided by the embodiments of the invention predict, from the first frame image and the second frame image, the second draw frame of the target to be amplified in the preset frame image, and amplify the image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image containing the target to be amplified. In this way, a dynamic target can be accurately frame-pulled and amplified, i.e. the target to be amplified appears in the resulting target enlarged image.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present invention and should therefore not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a schematic diagram of a terminal connection relationship provided in an embodiment of the present invention.
Fig. 2 is a block diagram of a terminal according to an embodiment of the present invention.
Fig. 3 shows a flowchart of an image processing method provided by an embodiment of the present invention.
Fig. 4 is a flowchart illustrating sub-steps of step S2 shown in fig. 3.
Fig. 5 is a flowchart illustrating sub-steps of sub-step S21 shown in fig. 4.
Fig. 6 illustrates a first selection schematic diagram provided by an embodiment of the present invention.
Fig. 7 illustrates a second selection schematic diagram provided by an embodiment of the present invention.
Fig. 8 is a flowchart illustrating sub-steps of sub-step S22 shown in fig. 4.
Fig. 9 is a schematic diagram illustrating a first binary image according to an embodiment of the invention.
Fig. 10 is a flowchart illustrating sub-steps of sub-step S23 shown in fig. 4.
Fig. 11 is a flowchart illustrating sub-steps of step S5 shown in fig. 3.
Fig. 12 is a flowchart illustrating sub-steps of step S6 shown in fig. 3.
Fig. 13 is a flowchart illustrating sub-steps of step S7 shown in fig. 3.
Fig. 14 is a block diagram schematically illustrating an image processing apparatus according to an embodiment of the present invention.
Reference numerals: 100 - terminal; 101 - processor; 102 - memory; 103 - bus; 104 - communication interface; 105 - display screen; 200 - image processing apparatus; 201 - frame image acquisition module; 202 - first information acquisition module; 203 - second information acquisition module; 204 - movement information calculation module; 205 - draw frame prediction module; 206 - prediction image acquisition module; 207 - image magnification module; 300 - camera; 400 - pan/tilt head.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, the terminal 100 has an image display function and is equipped with a camera 300 or communicatively connected to the camera 300; it is also communicatively connected to a pan/tilt head 400. The camera 300 may be mechanically connected to the pan/tilt head 400, so that the pan/tilt head 400, by rotating, drives the camera 300 to rotate with it. Specifically, the camera 300 may be disposed above or below the pan/tilt head 400, which is not limited by the present invention.
The terminal 100 may be, but is not limited to, a smart phone, a tablet computer, a personal computer, a vehicle-mounted computer, a Personal Digital Assistant (PDA), and the like. Referring to fig. 2, the terminal 100 includes a processor 101, a memory 102, a bus 103, a communication interface 104 and a display screen 105, the processor 101, the memory 102, the communication interface 104 and the display screen 105 are connected via the bus 103, and the processor 101 is configured to execute an executable module, such as a computer program, stored in the memory 102.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the image processing method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 102 may comprise a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
The bus 103 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Only one bidirectional arrow is shown in fig. 2, but this does not mean that there is only one bus 103 or one type of bus 103.
The terminal 100 is communicatively connected to the camera 300 and the pan/tilt head 400 through at least one communication interface 104 (which may be wired or wireless). The memory 102 is used to store a program such as the image processing apparatus 200. The image processing apparatus 200 includes at least one software functional module that may be stored in the memory 102 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the terminal 100. The processor 101 executes the program to implement the image processing method after receiving the execution instruction.
The display screen 105 is used to display an image, which may be the result of some processing by the processor 101. The display screen 105 may be a touch display screen, a display screen without interactive functionality, or the like. The display screen 105 may display the first frame image, the second frame image, or the preset frame image.
It should be understood that the configuration shown in fig. 2 is only a schematic illustration of the configuration application of the terminal 100, and that the terminal 100 may also include more or fewer components than shown in fig. 2, or have a different configuration than shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart illustrating an image processing method according to an embodiment of the invention. The image processing method comprises the following steps:
step S1, a first frame image and a second frame image are acquired, where both the first frame image and the second frame image contain an object to be enlarged.
In the embodiment of the present invention, the first frame image and the second frame image are each an image containing the target to be amplified, and the two images may be separated by a preset frame number, where the preset frame number may be zero frames, one frame, or two frames. The first frame image and the second frame image may be captured by the camera 300, or may be frame images in a video stored in the memory 102 of the terminal 100. It should be noted that the camera 300 captures frames automatically at a preset time interval, i.e. the capture interval between any two adjacent frames is equal. The time interval between the first frame image and the second frame image is therefore (preset frame number + 1) × the capture interval between adjacent frames.
Step S2, acquiring first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame.
In the embodiment of the present invention, the first position information may be the centroid, center of gravity, or geometric center of the target to be amplified in the first frame image. The first draw frame may be a frame in the first frame image that is centered on the first position information and contains the target to be amplified, so that the target to be amplified is in the image area corresponding to the first draw frame. Acquiring the first position information and the first draw frame may be understood as follows: first, the target to be amplified in the first frame image is determined based on a selection operation on the first frame image; second, centroid detection is performed on the target to be amplified in the first frame image to obtain the corresponding first centroid coordinate; finally, contour detection is performed on the target to be amplified in the first frame image to obtain its contour, and the first draw frame is obtained from the circumscribed rectangle of the contour.
Referring to fig. 4, step S2 may further include the following sub-steps:
the sub-step S21 determines the object to be enlarged in the first frame image based on the selected operation on the first frame image.
In the embodiment of the present invention, based on the selection operation on the first frame image, it can be understood that the user directly frames out the object to be enlarged on the operation plane.
Referring to fig. 5, the sub-step S21 may further include the following sub-steps:
In sub-step S211, a selected center point in the first frame image is determined based on the selection operation on the first frame image.
In the embodiment of the present invention, the selected center point may be the center point, in the first frame image, of the shape corresponding to the selection operation performed on the first frame image. The user's selection operation may be framing a closed shape (e.g., a rectangle, circle, or trapezoid), a click operation, or drawing an irregular shape on the operation plane. When the selection operation frames a closed shape, the selected center point of the closed shape is calculated, as shown for example in fig. 6; when the selection operation is a click operation, the click point is taken as the selected center point; when the selection operation draws an irregular shape on the operation plane, the selected center point of the irregular shape is calculated, as shown for example in fig. 7. It should be noted that the manner of determining the selected center point in the first frame image is not limited to those provided in this embodiment, and other manners capable of determining the selected center point in the first frame image fall within the scope of the present invention.
In sub-step S212, the target to be amplified is detected, according to a preset rule, within the preset image range in which the selected center point is located.
In the embodiment of the present invention, the preset image range may be a region, centered on the selected center point, that is preset to contain the target to be amplified. Specifically, it may be a circle centered on the selected center point with a preset radius, or another shape centered on that point. Detecting the target to be amplified within this range according to a preset rule can be understood as: performing target detection within the preset image range to obtain at least one target, and taking the target closest to the selected center point as the target to be amplified.
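As an illustration, a minimal Python sketch of sub-steps S211 and S212 follows; it is not the patented implementation, and `candidates` stands in for the output of whatever target detector is used:

```python
import math

def selected_center(points):
    """Center of the shape produced by the user's selection operation:
    the mean of the sampled boundary points. For a click operation,
    pass a single point."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def pick_target_to_amplify(center, candidates, radius):
    """Sub-step S212: among detected target centroids lying within
    `radius` of the selected center point, return the closest one."""
    in_range = [c for c in candidates if math.dist(c, center) <= radius]
    if not in_range:
        return None  # no target inside the preset image range
    return min(in_range, key=lambda c: math.dist(c, center))
```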
And a substep S22 of performing centroid detection on the object to be amplified in the first frame image to obtain a first centroid coordinate corresponding to the centroid of the object to be amplified.
In the embodiment of the present invention, the first centroid coordinate may be the coordinate of the centroid of the target to be amplified in the first frame image. Performing centroid detection on the target to be amplified in the first frame image to obtain the first centroid coordinate can be understood as: obtaining a binary image that contains the contour of the target to be amplified and comprises a plurality of pixel points; acquiring the coordinates of all pixel points belonging to the contour of the target to be amplified; and averaging these coordinates to obtain the first centroid coordinate.
Referring to fig. 8, the sub-step S22 may further include the following sub-steps:
in the substep S221, a binarization process is performed on the first frame image to obtain a first binary image, where the first binary image includes a plurality of first pixel points.
In the embodiment of the present invention, the first binary image may be a binary image containing only the edge of the target to be amplified, for example as shown in fig. 9. The step of performing binarization processing on the first frame image to obtain the first binary image may be understood as: performing target detection on the first frame image to detect the target to be amplified, and binarizing the first frame image according to the detected target, so that the first binary image contains only the target to be amplified.
In sub-step S222, the coordinates of the target pixel points whose pixel values equal a preset value are obtained from the plurality of first pixel points, where the target pixel points form the edge of the target to be amplified.
In the embodiment of the present invention, the preset value may be 0 or 255: when the target to be amplified in the first binary image is white and the background is black, the preset value is 255; when the target to be amplified is black and the background is white, the preset value is 0. The target pixel points are all first pixel points whose pixel value equals the preset value. Obtaining the coordinates of all such target pixel points can be understood as obtaining the coordinates of all pixel points that constitute the edge of the target to be amplified.
Sub-step S223: according to the coordinates of all target pixel points, the first centroid coordinate is calculated by the centroid formula

$$a = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad b = \frac{1}{n}\sum_{i=1}^{n} y_i,$$

where $n$ is the number of target pixel points, $(x_i, y_i)$ are the coordinates of the target pixel points, and $(a, b)$ is the first centroid coordinate.

In the embodiment of the invention, applying the centroid formula to the coordinates of all target pixel points constituting the edge of the target to be amplified means taking the average of those coordinates; the average coordinate is the first centroid coordinate. For example, if the coordinates of all target pixel points are (2, 5), (1, 4), (2, 3), (3, 2), (4, 3), (5, 4), (4, 5) and (3, 6), so that the number of target pixel points is 8, the centroid formula gives $a = 24/8 = 3$ and $b = 32/8 = 4$, i.e. the first centroid coordinate is (3, 4).
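The following NumPy sketch mirrors sub-steps S221 to S223 under the assumption that the target pixel points carry the preset value 255 in the first binary image:

```python
import numpy as np

def first_centroid(binary_image, preset_value=255):
    """Average the coordinates of all target pixel points (the edge
    pixels whose value equals the preset value) to get (a, b)."""
    ys, xs = np.nonzero(binary_image == preset_value)
    return float(xs.mean()), float(ys.mean())  # a, b

# The worked example from sub-step S223: eight edge pixels, centroid (3, 4).
pts = [(2, 5), (1, 4), (2, 3), (3, 2), (4, 3), (5, 4), (4, 5), (3, 6)]
img = np.zeros((10, 10), dtype=np.uint8)
for x, y in pts:
    img[y, x] = 255
print(first_centroid(img))  # (3.0, 4.0)
```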
And a sub-step S23 of performing contour detection on the target to be amplified in the first frame image to obtain the contour of the target to be amplified, and obtaining the first draw frame according to the contour of the target to be amplified and the first centroid coordinate.
In the embodiment of the present invention, this step may be understood as follows: contour detection is performed on the target to be amplified in the first frame image to detect its contour, and the coordinates along the contour are obtained; among them the uppermost coordinate D(d0, d1), the lowermost coordinate B(b0, b1), the leftmost coordinate C(c0, c1) and the rightmost coordinate A(a0, a1) are screened out. From these four coordinates a rectangular frame is obtained whose 4 vertex coordinates are (c0, b1), (a0, b1), (c0, d1) and (a0, d1). The rectangular frame enclosed by these 4 coordinates is then enlarged by a preset multiple (1 to 2 times) about the first centroid coordinate to obtain the first draw frame, which can likewise be represented by the coordinates of its 4 vertices.
Referring to fig. 10, the sub-step S23 may further include the following sub-steps:
and a substep S231 of obtaining a circumscribed rectangle of the outline of the target to be amplified and corresponding circumscribed rectangle information, wherein the circumscribed rectangle information comprises a rectangle length and a rectangle width.
In the embodiment of the present invention, the circumscribed rectangle may be the smallest rectangle that can contain the contour of the target to be amplified, and the circumscribed rectangle information may be the rectangle length and rectangle width of that smallest rectangle. From the coordinates along the contour, the uppermost coordinate D(d0, d1), the lowermost coordinate B(b0, b1), the leftmost coordinate C(c0, c1) and the rightmost coordinate A(a0, a1) are screened out, so that the rectangle length is a0 − c0 and the rectangle width is d1 − b1. For example, when the uppermost coordinate D is (4.3, 9), the lowermost coordinate B is (4, 5), the leftmost coordinate C is (3, 8) and the rightmost coordinate A is (5, 6), the rectangle length is 5 − 3 = 2 and the rectangle width is 9 − 5 = 4.
And a sub-step S232 of calculating the draw frame length and the draw frame width from the rectangle length and the rectangle width.
In the embodiment of the present invention, the draw frame length may be the length of the first draw frame and the draw frame width may be its width. The calculation may be understood as: enlarging the rectangle length by a first preset multiple (e.g., 1 to 2 times) to obtain the draw frame length, and enlarging the rectangle width by a second preset multiple (e.g., 1 to 2 times) to obtain the draw frame width. The first preset multiple and the second preset multiple may be the same or different, which is not limited here.
For example, if the circumscribed rectangle obtained in sub-step S231 has a length of 2 and a width of 4, and both preset multiples are 1.5, the draw frame length is 2 × 1.5 = 3 and the draw frame width is 4 × 1.5 = 6.
In sub-step S233, the first draw frame is obtained according to the draw frame length, the draw frame width and the first centroid coordinate.
In the embodiment of the present invention, this step may be understood as constructing a rectangle with the draw frame length and the draw frame width and centering it on the first centroid coordinate. For example, if the first centroid coordinate is (15, 20), the draw frame length is 3 and the draw frame width is 6, the first draw frame is the rectangle of length 3 and width 6 centered on (15, 20), which can be represented by the four vertex coordinate points (13.5, 17), (16.5, 17), (16.5, 23) and (13.5, 23).
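Sub-steps S231 to S233 amount to scaling the contour's circumscribed rectangle about the centroid; a minimal sketch, assuming the contour is given as a list of (x, y) points:

```python
def first_draw_frame(contour, centroid, scale_x=1.5, scale_y=1.5):
    """Take the circumscribed rectangle of the contour, enlarge its
    length/width by the preset multiples, and center the result on the
    first centroid coordinate. Returns (x1, y1, x2, y2)."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    frame_len = (max(xs) - min(xs)) * scale_x  # rectangle length a0 - c0, scaled
    frame_wid = (max(ys) - min(ys)) * scale_y  # rectangle width d1 - b1, scaled
    a, b = centroid
    return (a - frame_len / 2, b - frame_wid / 2,
            a + frame_len / 2, b + frame_wid / 2)

# Worked example: a 2 x 4 circumscribed rectangle, multiples of 1.5 and
# centroid (15, 20) give the draw frame (13.5, 17.0, 16.5, 23.0).
```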
In step S3, second position information of the object to be enlarged in the second frame image is acquired.
In the embodiment of the present invention, the second position information may be the centroid, center of gravity, or geometric center of the target to be amplified in the second frame image. It should be noted that the first position information and the second position information should have the same attribute: when the first position information is a centroid, the second position information is a centroid; when the first position information is a center of gravity, the second position information is a center of gravity; when the first position information is a geometric center, the second position information is a geometric center. By symmetry with the first frame image, a frame centered on the second position information and containing the target to be amplified could likewise be formed in the second frame image. Acquiring the second position information of the target to be amplified in the second frame image may be understood as: determining the target to be amplified in the second frame image based on a selection operation on the second frame image, and then performing centroid detection on the target to be amplified in the second frame image to obtain the corresponding second centroid coordinate. In other embodiments of the present invention, instead of a selection operation on the second frame image, target detection may be performed directly in the second frame image according to feature information of the target to be amplified in the first frame image.
It should be noted that the principle of acquiring the second position information of the target to be amplified in the second frame image is substantially the same as that of acquiring the first position information in the first frame image; reference may be made to the corresponding content of step S2 above.
In step S4, movement information of the object to be magnified is calculated based on the first position information and the second position information.
In the embodiment of the present invention, the movement information may be the vector by which the target to be amplified moves between every two adjacent frames from the first frame image to the second frame image. When the preset frame number between the first frame image and the second frame image is zero, i.e. the two images are adjacent, the movement information is (second position information − first position information); when the preset frame number is one, i.e. one frame lies between them, the movement information is (second position information − first position information)/2; when the preset frame number is two, i.e. two frames lie between them, the movement information is (second position information − first position information)/3.
The following example takes the first position information as the first centroid coordinate, the second position information as the second centroid coordinate, and a preset frame number of one frame between the first frame image and the second frame image.
When the first centroid coordinate is (15, 20) and the second centroid coordinate is (11, 8), the movement information is

$$\left(\frac{11 - 15}{2}, \; \frac{8 - 20}{2}\right) = (-2, -6),$$

i.e. the target to be amplified moves by (−2, −6) per frame.
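The per-frame movement vector of step S4 is simply the centroid displacement divided by the number of frame steps; a minimal sketch:

```python
def movement_info(first_pos, second_pos, preset_frames=1):
    """Per-frame movement vector between the first and second frame
    images, which are separated by `preset_frames` intermediate frames."""
    steps = preset_frames + 1
    return ((second_pos[0] - first_pos[0]) / steps,
            (second_pos[1] - first_pos[1]) / steps)

print(movement_info((15, 20), (11, 8)))  # (-2.0, -6.0)
```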
Step S5, predicting a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame.
In the embodiment of the present invention, the preset frame image may be the image in which the user wants to frame-pull and amplify the target to be amplified, and the preset frame number may be a user-set number of frames (for example, 5) between the preset frame image and the first frame image. This preset frame number is larger than the number of frames between the second frame image and the first frame image, i.e. the preset frame image comes after the second frame image. The second draw frame may be a frame whose corresponding image area in the preset frame image contains the target to be amplified, and the first draw frame and the second draw frame may be of the same size. Predicting the second draw frame according to the first position information, the movement information, the first draw frame and the preset frame number may be understood as: first, third position information of the target to be amplified in the preset frame image is predicted from the first position information, the movement information and the preset frame number; then, the second draw frame is obtained from the third position information and the length and width of the first draw frame.
Referring to fig. 11, step S5 may further include the following sub-steps:
and a substep S51 of predicting third position information of the object to be enlarged in the preset frame image based on the first position information, the movement information, and the preset frame number.
In the embodiment of the present invention, the third position information may be the centroid, center of gravity, or geometric center of the target to be amplified as predicted in the preset frame image. The third position information should have the same attribute as the first and second position information: when the first position information is a centroid, the second and third position information are centroids; when it is a center of gravity, they are centers of gravity; when it is a geometric center, they are geometric centers. Predicting the third position information from the first position information, the movement information and the preset frame number means: third position information = first position information + movement information × (preset frame number + 1). Specifically, when the first position information is the first centroid coordinate (15, 20), the movement information is (2, 6) and the preset frame number is 5, the third position information is (15, 20) + (2, 6) × (5 + 1) = (27, 56).
And a sub-step S52 of obtaining the second draw frame in the preset frame image according to the third position information and the first draw frame.
In the embodiment of the present invention, this step may be understood as constructing the second draw frame with the draw frame length and draw frame width of the first draw frame, centered on the third position information. For example, when the third position information is (27, 56) and the first draw frame has length 3 and width 6, the second draw frame is the rectangular frame of length 3 and width 6 centered on (27, 56). Specifically, the second draw frame can be represented by its 4 vertex coordinates (25.5, 53), (28.5, 53), (28.5, 59) and (25.5, 59).
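Step S5 therefore extrapolates the centroid linearly and re-centers the first draw frame on the predicted position; a minimal sketch:

```python
def predict_second_draw_frame(first_pos, movement, preset_frames,
                              frame_len, frame_wid):
    """Sub-steps S51/S52: third position = first position +
    movement x (preset frame number + 1); the second draw frame keeps
    the first draw frame's length and width, centered on that position."""
    cx = first_pos[0] + movement[0] * (preset_frames + 1)
    cy = first_pos[1] + movement[1] * (preset_frames + 1)
    return (cx - frame_len / 2, cy - frame_wid / 2,
            cx + frame_len / 2, cy + frame_wid / 2)

# Worked example: (15, 20) + (2, 6) x 6 = (27, 56) with a 3 x 6 frame
print(predict_second_draw_frame((15, 20), (2, 6), 5, 3, 6))
# (25.5, 53.0, 28.5, 59.0)
```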
In step S6, a preset frame image is acquired.
In the embodiment of the present invention, the preset frame image is acquired in the same manner as the first frame image and the second frame image. Specifically, when the first frame image and the second frame image are frame images in a video stored in the memory 102 of the terminal 100, the preset frame image is the frame image separated from the first frame image by the preset frame number; when the first frame image and the second frame image are images automatically captured by the camera 300 at the preset time interval, the preset frame image is likewise an image captured by the camera 300, the preset frame number after the first frame image.
When any vertex coordinate of the second draw frame exceeds the coordinates of the display page (the dynamic target may be about to leave the shooting area), i.e. the image area corresponding to the second draw frame exceeds the original shooting range, the pan/tilt head 400 needs to be controlled to rotate so as to drive the camera 300 to rotate, thereby changing the shooting range and ensuring that the acquired preset frame image contains the target to be amplified.
The following takes as an example the case where the preset frame image is captured by the camera 300 and the image area corresponding to the second draw frame exceeds the original shooting range, and explains how the rotation of the pan/tilt head 400 is controlled.
Referring to fig. 12, step S6 may further include the following sub-steps:
in sub-step S61, the center coordinates of the first frame image are acquired.
In the embodiment of the present invention, the center coordinate may be the coordinate of the center point of the first frame image. For example, when the first frame image is 200 × 100, its center coordinate is (100, 50) and the coordinates of its four vertices are (0, 0), (200, 0), (200, 100) and (0, 100).
In sub-step S62, the plurality of vertex coordinates of the second draw frame are obtained.
In the embodiment of the present invention, the plurality of vertex coordinates may be the coordinates of the 4 vertices of the second draw frame, which were already obtained in sub-step S52, where the second draw frame is represented by its 4 vertex coordinates.
And a substep S63 of calculating the rotation angle based on the center coordinates and the plurality of vertex coordinates.
In the embodiment of the present invention, the rotation angle may be the angle through which the camera 300 rotates to ensure that the target to be amplified is in the preset frame image. The rotation angle may include a horizontal rotation angle and a vertical rotation angle. Let the coordinates of the 4 vertices of the second draw frame be $(x_1, y_1)$, $(x_2, y_1)$, $(x_2, y_2)$ and $(x_1, y_2)$, where $x_1 < x_2$ and $y_1 < y_2$, and let the center coordinate be $(x_{mid}, y_{mid})$. Calculating the rotation angle according to the center coordinate and the plurality of vertex coordinates may then be understood as

$$P = P_0 + \frac{\frac{x_1 + x_2}{2} - x_{mid}}{x_{all}} \cdot V_P,$$

where $V_P$ is the horizontal field angle, $P_0$ is the initial horizontal viewing angle, and $x_{all}$ is the total number of pixel points in the horizontal direction of the display page; and

$$T = T_0 + \frac{\frac{y_1 + y_2}{2} - y_{mid}}{y_{all}} \cdot V_T,$$

where $V_T$ is the vertical field angle, $T_0$ is the initial vertical viewing angle, and $y_{all}$ is the total number of pixel points in the vertical direction of the display page. When the first frame image is captured, the horizontal viewing angle corresponding to the center point is the initial horizontal viewing angle $P_0$, and the corresponding vertical viewing angle is the initial vertical viewing angle $T_0$.
And a substep S64, controlling the pan/tilt head 400 to rotate according to the rotation angle, driving the camera 300 to rotate, shooting the target to be amplified to obtain a preset frame image, and sending the preset frame image to the terminal 100.
In the embodiment of the present invention, the pan/tilt head 400 is controlled to rotate according to the rotation angle, specifically rotating in the horizontal direction by the horizontal rotation angle and in the vertical direction by the vertical rotation angle. After both rotations are completed, the target to be amplified is captured a first interval time after the first frame image was captured, yielding the preset frame image. The first interval time is related to the preset frame number and the preset time interval at which the camera 300 captures adjacent frame images; specifically, the first interval time equals the preset time interval × (preset frame number + 1). For example, if the preset time interval is 0.02 s and the preset frame number is 14, the first interval time is 0.02 × (14 + 1) = 0.3 s; that is, after the camera 300 captures the first frame image, the pan/tilt head 400 is controlled to rotate, the preset frame image is captured 0.3 s after the first frame image, and the preset frame image is sent to the terminal 100.
By rotating the pan/tilt head 400 and thereby the camera 300, the shooting range of the camera 300 is adjusted, which ensures that the target to be amplified lies in the image area corresponding to the second draw frame of the preset frame image and is located in the middle area of the preset frame image.
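A sketch of the calculation in sub-steps S61 to S64, following the formulas reconstructed above (the exact equations in the original publication are image placeholders, so this is an interpretation rather than the verbatim patented formula):

```python
def needs_rotation(frame, x_all, y_all):
    """Rotate only when some vertex of the second draw frame leaves the
    display page, i.e. the target may exit the current shooting range."""
    x1, y1, x2, y2 = frame
    return x1 < 0 or y1 < 0 or x2 > x_all or y2 > y_all

def rotation_angles(frame, center, v_p, v_t, p0, t0, x_all, y_all):
    """Pan/tilt angles that bring the second draw frame's center toward
    the middle of the display page. `frame` is (x1, y1, x2, y2) with
    x1 < x2 and y1 < y2; `center` is the image center (x_mid, y_mid)."""
    x1, y1, x2, y2 = frame
    x_mid, y_mid = center
    pan = p0 + ((x1 + x2) / 2 - x_mid) * v_p / x_all   # horizontal angle
    tilt = t0 + ((y1 + y2) / 2 - y_mid) * v_t / y_all  # vertical angle
    return pan, tilt
```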
And step S7, amplifying the image area corresponding to the second draw frame in the preset frame image to obtain the target enlarged image.
In the embodiment of the present invention, the target enlarged image may be the image obtained by enlarging the image area corresponding to the second draw frame in the preset frame image. This step may be understood as: obtaining a magnification factor, and then enlarging the image area corresponding to the second draw frame according to the magnification factor to obtain the target enlarged image.
Referring to fig. 13, step S7 may further include the following sub-steps:
and a substep S71 of obtaining the magnification according to the first frame and the preset view.
In the embodiment of the present invention, the magnification may be a magnification of an image region corresponding to the second frame to obtain a target enlarged image. The preset view may be a view displayed on a display page of the display screen 105 of the terminal 100, and the view length and the view width thereof are fixed. The step of obtaining the magnification factor according to the first frame and the preset view can be understood as: acquiring the length and the width of a first pull frame (equal to a second pull frame); acquiring preset view length and view width; calculating the length of the pull frame and the length of the view to obtain a first ratio; calculating the width of the draw frame and the width of the view to obtain a second ratio; comparing the first ratio with the second ratio; when the first ratio is larger than the second ratio, taking the second ratio as a magnification; and when the first ratio is smaller than or equal to the second ratio, taking the first ratio as the magnification. The view length is the length of the display page, and the view width is the width of the display page.
In other embodiments of the present invention, obtaining the magnification factor according to the first draw frame and the preset view may be understood as: acquiring the draw frame length and draw frame width of the first draw frame (equal to those of the second draw frame); calculating the draw frame area S1 from the draw frame length and the draw frame width; acquiring the preset view length and view width; calculating the view area S2 from the view length and the view width; and obtaining the magnification factor according to the formula

$$Z = Z_0 \cdot \sqrt{\frac{S_2}{S_1}},$$

where $Z_0$ is the initial magnification, i.e. the magnification at which the first frame image was captured.
And a sub-step S72 of amplifying the image area corresponding to the second draw frame in the preset frame image according to the magnification factor to obtain the target enlarged image.
In the embodiment of the present invention, the image area corresponding to the second draw frame in the preset frame image is enlarged according to the magnification factor. For example, when the magnification factor is 4, the image area corresponding to the second draw frame is enlarged 4 times to obtain the target enlarged image, and only the target enlarged image is displayed on the display page.
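A minimal sketch of the ratio-based magnification of sub-step S71, under the assumption that each ratio is the view dimension divided by the draw frame dimension (so that the enlarged region still fits the display page); the area-based variant follows the formula reconstructed above:

```python
import math

def magnification(frame_len, frame_wid, view_len, view_wid):
    """Keep the smaller of the two view/draw-frame ratios so that the
    enlarged draw frame region fits inside the fixed preset view."""
    first_ratio = view_len / frame_len
    second_ratio = view_wid / frame_wid
    return min(first_ratio, second_ratio)

def magnification_by_area(frame_len, frame_wid, view_len, view_wid, z0=1.0):
    """Area-based variant: S1 is the draw frame area, S2 the view area;
    reconstructed here as Z = Z0 * sqrt(S2 / S1)."""
    s1 = frame_len * frame_wid
    s2 = view_len * view_wid
    return z0 * math.sqrt(s2 / s1)
```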
Compared with the prior art, the embodiment of the invention has the following advantages:
firstly, the movement information is obtained from the first position information in the first frame image and the second position information in the second frame image, and the third position information is predicted from the movement information, the first position information and the preset frame number, so that the motion trajectory of the dynamic target can be predicted.
Secondly, by rotating the pan/tilt head 400, the shooting range of the camera 300 is adjusted, ensuring that the target to be amplified is located in the image area corresponding to the second draw frame of the preset frame image and in the middle area of the preset frame image.
Finally, the second draw frame is predicted and the image area corresponding to it is amplified, so that a dynamic target can be accurately frame-pulled and amplified, i.e. the target to be amplified appears in the resulting target enlarged image.
Second embodiment
Referring to fig. 14, fig. 14 is a block diagram illustrating an image processing apparatus 200 according to an embodiment of the invention. The image processing apparatus 200 includes a frame image acquisition module 201, a first information acquisition module 202, a second information acquisition module 203, a movement information calculation module 204, a draw frame prediction module 205, a prediction image acquisition module 206, and an image magnification module 207.
The frame image acquiring module 201 is configured to acquire a first frame image and a second frame image, where the first frame image and the second frame image both include an object to be amplified.
It is understood that the frame image acquisition module 201 may perform the above step S1.
The first information acquisition module 202 is configured to acquire first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame.
It is understood that the first information obtaining module 202 may execute the above step S2.
In this embodiment of the present invention, the first information acquisition module 202 is specifically configured to: determine the target to be amplified in the first frame image based on a selection operation on the first frame image; perform centroid detection on the target to be amplified in the first frame image to obtain the first centroid coordinate corresponding to the centroid of the target to be amplified; and perform contour detection on the target to be amplified in the first frame image to obtain the contour of the target to be amplified, obtaining the first draw frame according to the contour of the target to be amplified and the first centroid coordinate.
The first information acquisition module 202 performs the step of determining the target to be amplified in the first frame image based on a selection operation on the first frame image, and is specifically configured to: determine a selected center point in the first frame image based on the selection operation on the first frame image; and detect the target to be amplified, according to a preset rule, within the preset image range in which the selected center point is located.
The first information acquisition module 202 performs the step of performing centroid detection on the target to be amplified in the first frame image to obtain the first centroid coordinate corresponding to the centroid of the target to be amplified, and is specifically configured to: perform binarization processing on the first frame image to obtain a first binary image, where the first binary image comprises a plurality of first pixel points; obtain the coordinates of the target pixel points whose pixel values equal the preset value among the first pixel points, where the target pixel points form the edge of the target to be amplified; and calculate the first centroid coordinate from the coordinates of all target pixel points according to the centroid formula

$$a = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad b = \frac{1}{n}\sum_{i=1}^{n} y_i,$$

where $n$ is the number of target pixel points, $(x_i, y_i)$ are the coordinates of the target pixel points, and $(a, b)$ is the first centroid coordinate.
The first information acquisition module 202 performs the step of obtaining the first draw frame according to the contour of the target to be amplified and the first centroid coordinate, and is specifically configured to: acquire the circumscribed rectangle of the contour of the target to be amplified and the corresponding circumscribed rectangle information, where the circumscribed rectangle information includes a rectangle length and a rectangle width; calculate the draw frame length and the draw frame width from the rectangle length and the rectangle width; and obtain the first draw frame according to the draw frame length, the draw frame width and the first centroid coordinate.
And the second information acquisition module 203 is used for acquiring second position information of the target to be amplified in the second frame image.
It is understood that the second information acquiring module 203 may perform the above step S3.
And the movement information calculating module 204 is configured to calculate movement information of the target to be amplified according to the first position information and the second position information.
It is understood that the movement information calculation module 204 may perform the above step S4.
And a draw frame prediction module 205, configured to predict a second draw frame of the target to be amplified in the preset frame image according to the first position information, the movement information, the first draw frame and the preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame.
It is understood that the draw frame prediction module 205 may perform the step S5 described above.
In an embodiment of the present invention, the draw frame prediction module 205 is specifically configured to: predict third position information of the target to be amplified in the preset frame image according to the first position information, the movement information and the preset frame number; and obtain the second draw frame in the preset frame image according to the third position information and the first draw frame.
And a prediction image obtaining module 206, configured to obtain a preset frame image.
It is to be understood that the predictive image acquisition module 206 may perform the above-described step S6.
In this embodiment of the present invention, the prediction image acquisition module 206 is specifically configured to: acquire the center coordinate of the first frame image; obtain the plurality of vertex coordinates of the second draw frame; calculate the rotation angle according to the center coordinate and the plurality of vertex coordinates; and control the pan/tilt head 400 to rotate according to the rotation angle, driving the camera 300 to rotate, so as to capture the target to be amplified, obtain the preset frame image, and send the preset frame image to the terminal 100.
And the image magnification module 207 is configured to amplify the image area corresponding to the second draw frame in the preset frame image to obtain the target enlarged image.
It is understood that the image magnification module 207 may perform the above step S7.
In the embodiment of the present invention, the image magnification module 207 is specifically configured to: obtain the magnification factor according to the first draw frame and the preset view; and amplify the image area corresponding to the second draw frame in the preset frame image according to the magnification factor to obtain the target enlarged image.
The image magnification module 207 performs the step of obtaining the magnification factor according to the first draw frame and the preset view, and is specifically configured to: acquire the draw frame length and draw frame width of the first draw frame; acquire the view length and view width of the preset view; calculate a first ratio from the draw frame length and the view length; calculate a second ratio from the draw frame width and the view width; compare the first ratio with the second ratio; when the first ratio is larger than the second ratio, take the second ratio as the magnification factor; and when the first ratio is smaller than or equal to the second ratio, take the first ratio as the magnification factor.
In summary, the present invention provides an image processing method and device, the method comprising: acquiring a first frame image and a second frame image that both contain a target to be amplified; acquiring first position information and a first draw frame of the target to be amplified in the first frame image, where the target to be amplified is in the image area corresponding to the first draw frame; acquiring second position information of the target to be amplified in the second frame image; calculating movement information of the target to be amplified according to the first position information and the second position information; predicting a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, where the target to be amplified is in the image area corresponding to the second draw frame; acquiring the preset frame image; and amplifying the image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image. Compared with the prior art, in which frame-pulling amplification is limited to static targets, the image processing method provided by the invention ensures that a dynamic target can be accurately frame-pulled and amplified, i.e. that the target to be amplified appears in the resulting target enlarged image.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.

It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first frame image and a second frame image, wherein the first frame image and the second frame image both comprise a target to be amplified;
acquiring first position information and a first draw frame of the target to be amplified in the first frame image, wherein the target to be amplified is in an image area corresponding to the first draw frame;
acquiring second position information of the target to be amplified in a second frame image;
calculating the movement information of the target to be amplified according to the first position information and the second position information;
predicting a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, wherein the target to be amplified is in an image area corresponding to the second draw frame;
and magnifying an image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image.
2. The method of claim 1, wherein the first position information comprises a first centroid coordinate, and the step of acquiring the first position information and the first draw frame of the target to be amplified in the first frame image comprises:
determining the target to be amplified in the first frame image based on a selection operation on the first frame image;
carrying out centroid detection on the target to be amplified in the first frame image to obtain a first centroid coordinate corresponding to the centroid of the target to be amplified;
and carrying out contour detection on the target to be amplified in the first frame image to obtain a contour of the target to be amplified, and obtaining the first draw frame according to the contour of the target to be amplified and the first centroid coordinate.
3. The method of claim 2, wherein the step of determining the target to be amplified in the first frame image based on the selection operation on the first frame image comprises:
determining a selected center point in the first frame image based on the selection operation on the first frame image;
and detecting, according to a preset rule, the target to be amplified within a preset image range around the selected center point.
4. The method as claimed in claim 2, wherein the step of carrying out centroid detection on the target to be amplified in the first frame image to obtain the first centroid coordinate corresponding to the centroid of the target to be amplified comprises:
performing binarization processing on the first frame image to obtain a first binary image, wherein the first binary image comprises a plurality of first pixel points;
obtaining coordinates of target pixel points, among the first pixel points, whose pixel values equal a preset value, wherein the target pixel points form the edge of the target to be amplified;
calculating the first centroid coordinate according to the coordinates of all the target pixel points and the centroid formula
$$a = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad b = \frac{1}{n}\sum_{i=1}^{n} y_i$$
wherein n is the number of target pixel points, (x_i, y_i) are the coordinates of the i-th target pixel point, and (a, b) is the first centroid coordinate.
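A minimal sketch of claim 4's centroid computation, assuming an OpenCV-style binarization and taking the preset value to be the foreground value 255; both choices are illustrative, since the claim only names "a preset value".

```python
import cv2
import numpy as np

def first_centroid(gray_image, threshold=128, preset_value=255):
    """Binarize the first frame image, collect the pixels whose value
    equals the preset value, and average their coordinates.

    threshold and preset_value are illustrative assumptions. Note the
    claim says the preset-value pixels form the target's edge; this
    sketch simply averages all pixels at the preset value, which matches
    the formula when that value marks the edge pixels. Assumes at least
    one such pixel exists.
    """
    _, binary = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(binary == preset_value)   # coordinates of target pixel points
    n = len(xs)
    a = xs.sum() / n                              # a = (1/n) * sum of x_i
    b = ys.sum() / n                              # b = (1/n) * sum of y_i
    return a, b
```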
5. The method of claim 2, wherein the step of obtaining the first draw frame according to the contour of the target to be amplified and the first centroid coordinate comprises:
acquiring a circumscribed rectangle of the contour of the target to be amplified and corresponding circumscribed rectangle information, wherein the circumscribed rectangle information comprises a rectangle length and a rectangle width;
calculating a draw-frame length and a draw-frame width according to the rectangle length and the rectangle width;
and obtaining the first draw frame according to the draw-frame length, the draw-frame width and the first centroid coordinate.
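A sketch of claim 5 under one reading: the claim does not say how the draw-frame size is calculated from the rectangle size, so a simple margin factor is assumed here, and cv2.boundingRect stands in for the circumscribed rectangle.

```python
import cv2

def first_draw_frame(contour, centroid, margin=1.2):
    """Derive the first draw frame from the contour's circumscribed
    rectangle and center it on the first centroid coordinate.

    margin is an assumption: the claim only says the draw-frame length
    and width are "calculated" from the rectangle length and width.
    """
    _, _, rect_w, rect_h = cv2.boundingRect(contour)   # circumscribed rectangle
    frame_w = rect_w * margin                          # draw-frame width
    frame_h = rect_h * margin                          # draw-frame length
    a, b = centroid
    # Return the frame as (x, y, w, h), centered on the centroid.
    return (a - frame_w / 2.0, b - frame_h / 2.0, frame_w, frame_h)
```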
6. The method as claimed in claim 1, wherein the step of predicting the second draw frame of the target to be amplified in the preset frame image according to the first position information, the movement information, the first draw frame and the preset frame number comprises:
predicting third position information of the target to be amplified in the preset frame image according to the first position information, the movement information and the preset frame number;
and obtaining the second draw frame in the preset frame image according to the third position information and the first draw frame.
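Read together with claim 1, this prediction amounts to linear extrapolation: the per-frame movement is scaled by the preset frame number and the first draw frame is translated to the predicted position. A minimal sketch, assuming the movement information is a per-frame (dx, dy) displacement and the draw frame is an axis-aligned (x, y, w, h) box of unchanged size:

```python
def predict_second_draw_frame(first_pos, movement, first_frame_xywh, preset_frames):
    """Extrapolate the third position and translate the first draw frame
    to it, yielding the second draw frame.

    Assumptions: movement is the per-frame (dx, dy) displacement derived
    from the first and second position information, and the draw frame
    keeps its original size.
    """
    ax, ay = first_pos
    dx, dy = movement
    tx = ax + dx * preset_frames       # third position, x
    ty = ay + dy * preset_frames       # third position, y
    _, _, w, h = first_frame_xywh
    # Second draw frame: same size as the first, recentered on (tx, ty).
    return (tx - w / 2.0, ty - h / 2.0, w, h)

# A target moving 10 px right and 5 px down per frame, predicted 5 frames ahead:
print(predict_second_draw_frame((100.0, 100.0), (10.0, 5.0), (80, 80, 40, 40), 5))
```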
7. The method according to claim 1, wherein the step of magnifying the image area corresponding to the second draw frame in the preset frame image to obtain the target enlarged image comprises:
obtaining a magnification factor according to the first draw frame and a preset view;
and magnifying the image area corresponding to the second draw frame in the preset frame image according to the magnification factor to obtain the target enlarged image;
wherein before the step of magnifying the image area corresponding to the second draw frame in the preset frame image, the method further comprises:
and acquiring the preset frame image.
8. The method of claim 7, wherein the step of obtaining the magnification factor according to the first draw frame and the preset view comprises:
acquiring the length and the width of the first draw frame;
acquiring the view length and the view width of the preset view;
calculating a first ratio from the draw-frame length and the view length;
calculating a second ratio from the draw-frame width and the view width;
comparing the first ratio to the second ratio;
when the first ratio is larger than the second ratio, taking the second ratio as the magnification factor;
and when the first ratio is smaller than or equal to the second ratio, taking the first ratio as the magnification factor.
9. The method of claim 7, wherein the method is applied to a terminal, the terminal is in communication connection with both a camera and a pan-tilt head, and the camera is disposed on the pan-tilt head; the step of acquiring the preset frame image comprises:
acquiring a central coordinate of the first frame image;
obtaining a plurality of vertex coordinates of the second draw frame;
calculating a rotation angle according to the central coordinate and the vertex coordinates;
and controlling the pan-tilt head to rotate according to the rotation angle to drive the camera to rotate, shooting the target to be amplified to obtain the preset frame image, and sending the preset frame image to the terminal.
10. An image processing apparatus, characterized in that the apparatus comprises:
the frame image acquisition module is used for acquiring a first frame image and a second frame image, wherein the first frame image and the second frame image both comprise a target to be amplified;
the first information acquisition module is used for acquiring first position information and a first draw frame of the target to be amplified in the first frame image, wherein the target to be amplified is in an image area corresponding to the first draw frame;
the second information acquisition module is used for acquiring second position information of the target to be amplified in a second frame image;
the mobile information calculation module is used for calculating the mobile information of the target to be amplified according to the first position information and the second position information;
a draw-frame prediction module, configured to predict a second draw frame of the target to be amplified in a preset frame image according to the first position information, the movement information, the first draw frame and a preset frame number, wherein the target to be amplified is in an image area corresponding to the second draw frame;
and the image magnification module is used for magnifying an image area corresponding to the second draw frame in the preset frame image to obtain a target enlarged image.
CN201811476121.8A 2018-12-04 2018-12-04 Image processing method and device Pending CN111273837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811476121.8A CN111273837A (en) 2018-12-04 2018-12-04 Image processing method and device


Publications (1)

Publication Number Publication Date
CN111273837A true CN111273837A (en) 2020-06-12

Family

ID=71001413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811476121.8A Pending CN111273837A (en) 2018-12-04 2018-12-04 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111273837A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085782A (en) * 2020-09-08 2020-12-15 广州医软智能科技有限公司 Amplification factor detection method and detection device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102710896A (en) * 2012-05-07 2012-10-03 浙江宇视科技有限公司 Method and device for drawing frame to zoom in of dynamic targets
WO2014147945A1 (en) * 2013-03-19 2014-09-25 Sony Corporation Image processing method, image processing device and image processing program
CN105678808A (en) * 2016-01-08 2016-06-15 浙江宇视科技有限公司 Moving object tracking method and device
CN105931182A (en) * 2016-04-15 2016-09-07 惠州Tcl移动通信有限公司 Image zooming in/out method and system
CN107037962A (en) * 2015-10-23 2017-08-11 株式会社摩如富 Image processing apparatus, electronic equipment and image processing method
CN107645628A (en) * 2016-07-21 2018-01-30 中兴通讯股份有限公司 A kind of information processing method and device
CN108805924A (en) * 2018-05-22 2018-11-13 湘潭大学 A kind of lily picking independent positioning method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200612