CN107645628B - Information processing method and device

Information processing method and device

Info

Publication number: CN107645628B
Application number: CN201610578837.3A
Authority: CN (China)
Prior art keywords: target image, boundary, image area, value, determining
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN107645628A (en)
Inventors: 魏祺韡, 张本好
Current Assignee: ZTE Corp
Original Assignee: ZTE Corp
Application filed by ZTE Corp
Priority to CN201610578837.3A
Priority to PCT/CN2017/000084 (WO2018014517A1)
Publication of CN107645628A
Application granted
Publication of CN107645628B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Abstract

The embodiment of the invention discloses an information processing method, which comprises the steps of obtaining position information of a target image in a preview image, and determining a target image area and parameter information of the target image area according to the position information of the target image; determining an adjusting parameter according to the parameter information of the target image area and a boundary parameter of the preview image; displaying the target image area in an enlarged manner according to the adjusting parameter; and prompting a moving direction when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition, so that the target image area remains within the preview image. The embodiment of the invention also discloses an information processing device.

Description

Information processing method and device
Technical Field
The present invention relates to image processing technology in the field of information processing, and in particular, to an information processing method and apparatus.
Background
With the development of electronic technology, terminals provide more and more functions, such as a photographing function. When watching a large football match, a basketball game, a star concert, or a large performance, or when observing small animals, a user can take pictures or record videos of moving objects through the shooting function of the terminal. When shooting such a dynamic scene, the photographic subject is in motion, so the user needs to track it in real time during shooting to keep it within the shooting range of the terminal.
During shooting, if the image of the photographic subject on the shooting screen is small, the entire motion process of the subject can be tracked, but the subject may appear unclear because it is displayed too small. In the prior art, the focal length can be increased manually to enlarge the subject in the shooting screen, but the subject may then move out of the screen with even a slight movement, so the user has to track it continuously. The operation is therefore cumbersome, and the subject may be lost when it moves fast, which affects the user experience.
Disclosure of Invention
In order to solve the existing technical problems, the invention provides an information processing method and device, which solve the problems of unclear shooting and difficult tracking when a moving target object is shot and improve the user experience.
The technical scheme of the invention is realized as follows:
an information processing method, the method comprising:
acquiring position information of a target image in a preview image;
determining a target image area and parameter information of the target image area according to the position information of the target image;
determining an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image;
adjusting the target image area according to the adjusting parameter;
when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition, prompting a moving direction to enable the target image area to be in the preview image.
Optionally, the obtaining of the position information of the target image in the preview image includes:
if an operation instruction of a user is received within preset time, acquiring a first target image in the preview image and position information of the first target image;
if an operation instruction of a user is not received within a preset time, determining that all moving images in the preview image are second target images, and acquiring position information of the second target images, wherein the second target images comprise N target images, and N is a natural number greater than or equal to 1.
Optionally, the determining a target image area and parameter information of the target image area according to the position information of the target image includes:
determining the first target image area according to the position information of the first target image, wherein the first target image area is the minimum circumscribed rectangle of the first target image;
acquiring first parameter information of the first target image area, wherein the first parameter information comprises: coordinate information of the first target image area.
Optionally, the determining a target image area and parameter information of the target image area according to the position information of the target image includes:
determining N target image areas according to the position information of the N target images, wherein each target image area in the N target image areas is the minimum circumscribed rectangle of each target image;
determining a second target image area according to the N target image areas;
and acquiring second parameter information of the second target image area, wherein the second parameter information comprises coordinate information of the second target image area.
Optionally, the determining a second target image area according to the N target image areas includes:
acquiring N areas of the N target image areas, and determining a third target image area and a fourth target image area, wherein the third target image area is a target image area with the largest area in the N areas, and the fourth target image area comprises: a target image area other than the third target image area;
determining that the ratio of the area of each target image region in the fourth target image region to the maximum area is a first numerical value, and determining that the distance between the center position of each target image region in the fourth target image region and the center position of the third target image region is a second numerical value;
determining that a target image area for which the first numerical value is greater than or equal to a first preset value and the second numerical value is less than or equal to a second preset value is a fifth target image area;
and determining the minimum circumscribed rectangle of the fifth target image area as a second target image area.
Optionally, the determining an adjustment parameter according to the parameter information of the target image area and the boundary parameter of the preview image includes:
acquiring a value of a first boundary and a value of a second boundary of the preview image, wherein the preview image is rectangular, and the first boundary is vertical to the second boundary;
determining a value of a third boundary and a value of a fourth boundary of the target image area according to the parameter information of the target image area, wherein the target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary;
determining that the ratio of the value of the first boundary to the value of the third boundary is a third numerical value, and that the ratio of the value of the second boundary to the value of the fourth boundary is a fourth numerical value;
and determining the minimum value of the third numerical value and the fourth numerical value as the adjusting parameter.
Optionally, the adjusting the target image area according to the adjustment parameter includes:
and amplifying and displaying the target image area according to the adjusting parameter.
Optionally, when the distance between the boundary of the target image area and the boundary of the preview image satisfies a preset condition, prompting a moving direction includes:
and when the distance between a fifth boundary of the target image area and a sixth boundary of the preview image is smaller than or equal to a third preset value, prompting a moving direction, wherein the moving direction indicates to move to the sixth boundary, the fifth boundary is any one boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and the distance between the sixth boundary and the fifth boundary is minimum.
Optionally, before the obtaining of the position information of the target image in the preview image, the method further includes:
and acquiring the preview image.
An information processing apparatus, the apparatus comprising: an acquisition module, a determination module, a processing module and a prompt module; wherein:
the acquisition module is used for acquiring position information of a target image in the preview image;
the determining module is used for determining a target image area and parameter information of the target image area according to the position information of the target image, and is further used for determining an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image;
the processing module is used for adjusting the target image area according to the adjusting parameter;
the prompting module is used for prompting the moving direction when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition so as to enable the target image area to be in the preview image.
Optionally, the obtaining module is configured to receive an operation instruction of a user within a preset time, and obtain a first target image in the preview image and position information of the first target image;
the determining module is used for determining all the moving images in the preview image as second target images if the operation instruction of the user is not received within the preset time;
the obtaining module is further configured to obtain position information of the second target image, where the second target image includes N target images, and N is a natural number greater than or equal to 1.
Optionally, the determining module is configured to determine the first target image area according to the position information of the first target image, where the first target image area is a minimum circumscribed rectangle of the first target image;
the acquiring module is configured to acquire first parameter information of the first target image area, where the first parameter information includes: coordinate information of the first target image area.
Optionally, the determining module is configured to determine N target image areas according to position information of the N target images, where each target image area in the N target image areas is the minimum circumscribed rectangle of each target image, and is further configured to determine a second target image area according to the N target image areas;
the obtaining module is configured to obtain second parameter information of the second target image area, where the second parameter information includes coordinate information of the second target image area.
Optionally, the obtaining module is configured to obtain N areas of the N target image regions;
the determining module is configured to determine a third target image region and a fourth target image region, where the third target image region is the target image region with the largest area among the N areas, and the fourth target image region includes the target image areas other than the third target image area; the determining module is further configured to determine that the ratio of the area of each target image region in the fourth target image region to the maximum area is a first numerical value, and to determine that the distance between the center position of each target image region in the fourth target image region and the center position of the third target image region is a second numerical value; is further configured to determine that a target image area for which the first numerical value is greater than or equal to a first preset value and the second numerical value is less than or equal to a second preset value is a fifth target image area; and is further configured to determine the minimum circumscribed rectangle of the fifth target image area as the second target image area.
Optionally, the obtaining module is configured to obtain a value of a first boundary and a value of a second boundary of the preview image, where the preview image is a rectangle, the first boundary is perpendicular to the second boundary, and the value of the first boundary is greater than or equal to the value of the second boundary;
the determining module is configured to determine a value of a third boundary and a value of a fourth boundary of the target image area according to parameter information of the target image area, where the target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary; the ratio of the value of the first boundary to the value of the third boundary is a third value, and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value; and is further configured to determine a smallest value of the third and fourth values as the adjustment parameter.
Optionally, the processing module is configured to enlarge and display the target image area according to the adjustment parameter.
Optionally, the prompting module is configured to prompt a moving direction when a distance between a fifth boundary of the target image area and a sixth boundary of the preview image is smaller than or equal to a third preset value, where the moving direction indicates that the target image area moves to the sixth boundary, the fifth boundary is any one boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and a distance between the sixth boundary and the fifth boundary is minimum.
Optionally, the obtaining module is further configured to obtain the preview image.
The information processing method and device provided by the embodiment of the invention can acquire position information of a target image in a preview image, determine a target image area and parameter information of the target image area according to the position information of the target image, determine an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image, display the target image area in an enlarged manner according to the adjusting parameter, and prompt a moving direction when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition, so that the target image area remains within the preview image. Because the adjusting parameter is determined from the proportional relation between the boundary parameters of the preview image and the target image area and the target image is enlarged and displayed according to this parameter, the target image can be shot clearly; and because the user is prompted with a moving direction based on the distance between the boundary of the target image area and the boundary of the preview image, the target image can be tracked in real time. This solves the problems of unclear shooting and difficult tracking when shooting a moving target object and improves the user experience.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having different letter suffixes may represent different examples of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a first schematic flow chart of an information processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of a target image area display provided by an embodiment of the present invention;
FIG. 4 is a first exemplary diagram illustrating an enlarged target image area according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an example of location information of a target image area according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an example of a moving direction prompt according to an embodiment of the present invention;
fig. 7 is a third schematic flow chart of an information processing method according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating examples of multi-target image region display according to an embodiment of the present invention;
FIG. 9 is a second exemplary enlarged target image region display diagram provided in accordance with an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example one
An embodiment of the present invention provides an information processing method, as shown in fig. 1, the method including:
step 101, obtaining position information of a target image in a preview image.
It should be noted that the execution subject of the information processing method provided by the embodiment of the present invention is the information processing apparatus, that is, the information processing apparatus acquires the position information of the target image in the preview image, and the information processing apparatus may specifically be a device having a shooting function. For example, a mobile phone, a camera, a video camera, etc., having a photographing function.
Here, the photographing function may generally include photographing and video recording functions. The shooting device comprises a preview area; a user can view images through the preview area when shooting or after shooting is completed, the preview area can display the images in the current view-finding range of the shooting device, and the captured images or videos can be played back after shooting is completed. The preview image in the embodiment of the invention is the image of the current view-finding range of the shooting device displayed in the preview area.
Specifically, in daily life many of the objects a user photographs are dynamic, such as athletes running on a sports field, stars on stage at a concert, and small animals in motion; the target image in the embodiment of the present invention may be understood as such a dynamic photographic subject.
Specifically, in the shooting process, the information processing device detects whether an operation instruction of a user is received within a preset time, and if the operation instruction of the user is received within the preset time, the information processing device acquires a first target image in the preview image and position information of the first target image; if an operation instruction of a user is not received within a preset time, determining that all moving images in the preview image are second target images, and acquiring position information of the second target images, wherein the second target images comprise N target images, and N is a natural number greater than or equal to 1.
And 102, determining a target image area and parameter information of the target image area according to the position information of the target image.
Specifically, the information processing apparatus obtains position information of the first target image or the second target image, and the position information may be coordinate information.
In a possible implementation manner, the information processing apparatus determines a first target image area according to the position information of the first target image, where the first target image area is a minimum bounding rectangle of the first target image, and obtains first parameter information of the first target image area, where the first parameter information may include: coordinate information of the first target image area.
In a possible implementation manner, the information processing apparatus determines N target image areas according to position information of N target images, where each target image area in the N target image areas is a minimum circumscribed rectangle of each target image area; determining a second target image area according to the N target image areas; and acquiring second parameter information of the second target image area, wherein the second parameter information comprises coordinate information of the second target image area.
And 103, determining an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image.
Specifically, the information processing apparatus determines the values of the boundaries of the target image area, i.e., the length values of the four sides of the target image area, based on the coordinate information of the target image area. The boundary parameter of the preview image comprises the values of the boundary of the preview image, namely the length values of four edges of the preview image.
In one possible implementation manner, an information processing apparatus acquires a value of a first boundary and a value of a second boundary of the preview image, the first boundary being perpendicular to the second boundary; determining a value of a third boundary and a value of a fourth boundary of the target image area according to the parameter information of the target image area, wherein the target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary; the ratio of the value of the first boundary to the value of the third boundary is a third value, and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value; and determining the minimum value of the third numerical value and the fourth numerical value as the adjusting parameter.
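As a concrete illustration of this ratio computation, the following Python sketch derives the adjustment parameter from the two rectangles; the function name and the width/height tuple convention are chosen here purely for illustration and are not part of the embodiment described above.

    def compute_adjustment_parameter(preview_size, target_size):
        """Return the enlargement factor r = min(W / w, H / h).

        preview_size: (W, H) side lengths of the rectangular preview image.
        target_size:  (w, h) side lengths of the target image area, with w taken
                      along the side parallel to W and h along the side parallel to H.
        """
        preview_w, preview_h = preview_size
        target_w, target_h = target_size
        ratio_w = preview_w / target_w   # the third numerical value
        ratio_h = preview_h / target_h   # the fourth numerical value
        # The smaller ratio is chosen so the enlarged area still fits inside the preview.
        return min(ratio_w, ratio_h)

    # Example: a 1920 x 1080 preview and a 480 x 360 target area give r = min(4.0, 3.0) = 3.0.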
And 104, adjusting the target image area according to the adjusting parameter.
Specifically, the information processing apparatus enlarges and displays the target image region in the preview image according to the adjustment parameter, and the positions in the preview image that are not covered by the enlarged target image region are filled with a blurred image or a shadow image.
And 105, when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition, prompting a moving direction to enable the target image area to be in the preview image.
Specifically, when the distance between a fifth boundary of the target image area and a sixth boundary of the preview image is less than or equal to a third preset value, a moving direction is prompted, the moving direction indicates to move to the sixth boundary, the fifth boundary is any one boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and the distance between the sixth boundary and the fifth boundary is minimum.
Illustratively, with the third preset value w set to 1 cm, when the distance is less than or equal to 1 cm a prompt arrow is displayed over the currently displayed target image area; the arrow prompts the user to move the information processing apparatus in its direction so that the target image area stays within the preview image.
If the target image area moves out of the view-finding range, shooting is stopped and the preview image in the current view-finding range is displayed.
According to the information processing method provided by the embodiment of the invention, the adjusting parameter can be determined from the proportional relation between the boundary parameters of the preview image and the target image area, and the target image is enlarged and displayed according to the adjusting parameter, so that the target image can be shot clearly. The user is prompted with a moving direction based on the distance between the boundary of the target image area and the boundary of the preview image, so that the target image can be tracked in real time. This solves the problems of unclear shooting and difficult tracking when shooting a moving target object and improves the user experience.
Example two
An embodiment of the present invention provides an information processing method, as shown in fig. 2, the method includes:
step 201, the information processing apparatus acquires a preview image.
In the embodiment of the present invention, the information processing apparatus may specifically be a device having a shooting function. For example, a mobile phone, a camera, a video camera, etc., having a photographing function.
Specifically, when the shooting device is in an open state, an image in the current viewing range of the shooting device is a preview image.
Step 202, if an operation instruction of a user is received within a preset time, the information processing device obtains a first target image in the preview image and position information of the first target image.
Specifically, when the shooting starts, the information processing apparatus detects whether an operation instruction of the user is received within a preset time, where the preset time may be 5s or 10s, and may also be set according to a habit of the user, which is not limited in the embodiment of the present invention.
The operation instruction of the user may be selecting a target image in the preview image by pressing the screen; it may also be selecting the target image in the preview image through a keyboard, or another operation for selecting the target image in the preview image, which is not specifically limited in the embodiment of the present invention.
Specifically, if an operation instruction of a user is received within a preset time, a target image selected by the user is identified and tracked in a preview image by using a moving target tracking algorithm, and position information of the first target image is obtained, wherein the position information of the first target image includes coordinate information of the first target image.
The moving object tracking algorithm may be a background subtraction method, an optical flow method, a frame subtraction method, or the like.
Step 203, the information processing device determines the first target image area according to the position information of the first target image.
Specifically, the information processing apparatus may determine the position information of the first target image with a moving object detection algorithm, and thereby determine the first target image area. For example, the position information of the first target image may be determined from the background difference between the color information of the first target image and that of the background image, or the contour information of the first target image, that is, its position information, may be determined from the inter-frame difference between consecutive frames; the minimum circumscribed rectangle containing the first target image, that is, the first target image area, can then be determined from this contour information.
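By way of illustration only, the sketch below realises this step with inter-frame differencing followed by contour extraction using OpenCV (assuming OpenCV 4); it is merely one possible way of implementing the moving-target detection described above, not a required implementation, and the function name and threshold are assumptions.

    import cv2

    def first_target_image_area(prev_frame, curr_frame, diff_threshold=25):
        """Return the minimum circumscribed (upright) rectangle (x, y, w, h) of the
        largest moving region between two consecutive frames, or None if nothing moves."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(prev_gray, curr_gray)                   # inter-frame difference
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)                # close small gaps in the mask
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)               # contour of the first target image
        return cv2.boundingRect(largest)                           # its minimum circumscribed rectangle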
Step 204, the information processing device acquires first parameter information of the first target image area.
Wherein the first parameter information includes coordinate information of the first target image area.
Step 205, the information processing apparatus obtains a value of a first boundary and a value of a second boundary of the preview image, determines a value of a third boundary and a value of a fourth boundary of the first target image area according to the first parameter information of the first target image area, where a ratio of the value of the first boundary to the value of the third boundary is a third value, a ratio of the value of the second boundary to the value of the fourth boundary is a fourth value, and determines a minimum value of the third value and the fourth value as the adjustment parameter.
Wherein the preview image is rectangular, and the first boundary is perpendicular to the second boundary.
For example, as shown in fig. 3, a rectangle formed by the boundary a, the boundary B, the boundary C, and the boundary D is a preview image area, and the preview image area can be understood as a shooting interface of the shooting device, so that the side length of the preview area is the side length of the shooting interface, and the side length of the shooting interface can be stored in the shooting device in advance, or can be directly obtained.
The first target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary.
Illustratively, as shown in fig. 3, a rectangle formed by a boundary a, a boundary b, a boundary c, and a boundary d is a first target image area, an intersection of the boundary a and the boundary b is a first coordinate, an intersection of the boundary a and the boundary d is a second coordinate, an intersection of the boundary d and the boundary c is a third coordinate, an intersection of the boundary c and the boundary b is a fourth coordinate, a length of the boundary a can be determined according to the first coordinate and the second coordinate, a length of the boundary d can be determined according to the second coordinate and the third coordinate, a length of the boundary c can be determined according to the third coordinate and the fourth coordinate, and a length of the boundary b can be determined according to the first coordinate and the fourth coordinate.
Illustratively, as shown in fig. 3, a rectangle composed of the boundary A, the boundary B, the boundary C and the boundary D is the preview image area, and a rectangle composed of the boundary a, the boundary b, the boundary c and the boundary d is the first target image area. The boundary a and the boundary c are parallel to the boundary A, so the boundary a or the boundary c is the third boundary; the boundary b and the boundary d are parallel to the boundary B, so the boundary b or the boundary d is the fourth boundary. The opposite sides of a rectangle are parallel and equal, so the values of the boundary a and the boundary c are equal, and the values of the boundary b and the boundary d are equal. The ratio of the boundary A to the boundary a is r1, the ratio of the boundary B to the boundary b is r2, and the adjustment parameter r is the minimum of r1 and r2, expressed as r = min{r1, r2}.
And step 206, the information processing device enlarges and displays the first target image area according to the adjusting parameter.
Specifically, the information processing device enlarges and displays the first target image area in the preview image according to the adjustment parameter, fills the positions of the preview image that are not covered by the enlarged first target image area with a blurred image or a shadow image, and photographs or records the enlarged first target image area.
Illustratively, as shown in fig. 4, the inner rectangular region is the enlarged first target image region, and the region not filled with the enlarged first target image region is filled with a blurred image or a shadow image, i.e., the shadow regions on both sides of the enlarged first target image region in fig. 4.
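A possible rendering of this enlarge-and-fill behaviour is sketched below with OpenCV and NumPy; the darkening factor and the centring policy are assumptions made for illustration only. The target area is scaled by the adjustment parameter and centred over a darkened copy of the preview, so the uncovered strips appear as the shadow regions described above.

    import cv2
    import numpy as np

    def render_enlarged_area(preview, rect, r):
        """Enlarge the target image area rect = (x, y, w, h) by the adjustment
        parameter r and centre it over a darkened ("shadow") copy of the preview."""
        x, y, w, h = rect
        region = preview[y:y + h, x:x + w]
        enlarged = cv2.resize(region, None, fx=r, fy=r, interpolation=cv2.INTER_LINEAR)

        canvas = (preview * 0.3).astype(np.uint8)        # darkened preview used as the fill
        ph, pw = preview.shape[:2]
        eh, ew = min(enlarged.shape[0], ph), min(enlarged.shape[1], pw)
        top, left = (ph - eh) // 2, (pw - ew) // 2
        canvas[top:top + eh, left:left + ew] = enlarged[:eh, :ew]
        return canvas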
And step 207, when the distance between the fifth boundary of the first target image area and the sixth boundary of the preview image is smaller than or equal to a third preset value, the information processing device prompts a moving direction, and the moving direction indicates to move to the sixth boundary.
The fifth boundary is any one of the boundaries of the first target image area, the sixth boundary is parallel to the fifth boundary, and the distance between the sixth boundary and the fifth boundary is the minimum.
Optionally, after the information processing apparatus prompts the moving direction, if the user does not move in that direction, the first target image area will move out of the preview image area as the first target image moves, and the display then returns to the preview image.
In one possible embodiment, the third preset value is a single value; when the distance between a fifth boundary of the first target image area and a sixth boundary of the preview image is less than or equal to this value, the information processing apparatus prompts a moving direction indicating movement toward the sixth boundary.
The third preset value may be 1cm, 2cm or other values, which is not specifically limited in the embodiment of the present invention.
Illustratively, as shown in fig. 5, the distance between the boundary A and the boundary a is top, the distance between the boundary B and the boundary b is left, the distance between the boundary C and the boundary c is bottom, and the distance between the boundary D and the boundary d is right. The values of top, left, bottom and right can be obtained from the coordinate information of the first target image area. If the third preset value is 1 cm, then when the left value is less than or equal to 1 cm an arrow pointing to the left is displayed on the current picture, as shown in fig. 6. After seeing the prompt, the user moves the shooting device in the direction shown by the arrow, so that the first target image area stays within the preview image; if the user does not move the shooting device in that direction, the first target image area moves out of the preview image area, and the shooting device then returns to the preview image.
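The four distances and the resulting prompt can be computed as in the Python sketch below; the function name, the string return values and the single shared threshold are illustrative assumptions (the embodiment only requires that an arrow toward the nearest preview boundary be shown).

    def movement_prompt(preview_size, rect, threshold):
        """Return 'up', 'down', 'left' or 'right' as the direction the user should
        move the device, or None when no preview boundary is within `threshold`.

        preview_size: (W, H) of the preview image; rect: (x, y, w, h) of the target
        image area, both in the same pixel coordinates.
        """
        preview_w, preview_h = preview_size
        x, y, w, h = rect
        distances = {
            'up': y,                          # top distance
            'left': x,                        # left distance
            'down': preview_h - (y + h),      # bottom distance
            'right': preview_w - (x + w),     # right distance
        }
        direction, dist = min(distances.items(), key=lambda kv: kv[1])
        return direction if dist <= threshold else None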
In a possible embodiment, the third preset value may instead consist of four different preset values, one corresponding to each of the four boundaries of the first target image area.
The preview image is rectangular, and its four sides are respectively a first side, a second side, a third side and a fourth side; the first side is perpendicular to the second side and the fourth side, and the third side is perpendicular to the second side and the fourth side. The first target image area is within the preview image. The first target image area is rectangular, and its four sides are respectively a fifth side, a sixth side, a seventh side and an eighth side. The fifth side is parallel to the first side and the third side, and the distance between the fifth side and the first side is smaller than the distance between the fifth side and the third side; the distance between the fifth side and the first side is the first distance. The sixth side is parallel to the second side and the fourth side, and the distance between the sixth side and the second side is smaller than the distance between the sixth side and the fourth side; the distance between the sixth side and the second side is the second distance. The seventh side is parallel to the first side and the third side, and the distance between the seventh side and the third side is smaller than the distance between the seventh side and the first side; the distance between the seventh side and the third side is the third distance. The eighth side is parallel to the second side and the fourth side, and the distance between the eighth side and the fourth side is smaller than the distance between the eighth side and the second side; the distance between the eighth side and the fourth side is the fourth distance.
When the first distance is smaller than or equal to the preset value corresponding to the first side, a moving direction is prompted, and the moving direction indicates movement toward the first side;
when the second distance is smaller than or equal to the preset value corresponding to the second side, a moving direction is prompted, and the moving direction indicates movement toward the second side;
when the third distance is smaller than or equal to the preset value corresponding to the third side, a moving direction is prompted, and the moving direction indicates movement toward the third side;
and when the fourth distance is smaller than or equal to the preset value corresponding to the fourth side, a moving direction is prompted, and the moving direction indicates movement toward the fourth side.
Illustratively, as shown in fig. 5, the boundary A is the first side, the boundary B is the second side, the boundary C is the third side, and the boundary D is the fourth side; the boundary a is the fifth side, the boundary b is the sixth side, the boundary c is the seventh side, and the boundary d is the eighth side. The distance between the boundary A and the boundary a is the first distance, denoted top; the distance between the boundary B and the boundary b is the second distance, denoted left; the distance between the boundary C and the boundary c is the third distance, denoted bottom; and the distance between the boundary D and the boundary d is the fourth distance, denoted right. The preset value for the distance between the boundary A and the boundary a is set to 1 cm, the preset value for the distance between the boundary B and the boundary b is set to 1.2 cm, the preset value for the distance between the boundary C and the boundary c is set to 0.9 cm, and the preset value for the distance between the boundary D and the boundary d is set to 1.1 cm. If the left value is less than or equal to 1.2 cm, an arrow pointing to the left is displayed in the preview area, prompting the user to move the viewing range of the shooting device to the left, as shown in fig. 6, so that the first target image area stays within the preview image. If the first target image area moves out of the preview image, shooting is stopped and the preview image within the current viewing range is displayed.
According to the information processing method provided by the embodiment of the invention, the adjusting parameter can be determined from the proportional relation between the boundaries of the preview image and the first target image area, and the first target image is enlarged and displayed according to the adjusting parameter, so that the first target image can be shot clearly. The user is prompted with a moving direction based on the distance between the boundary of the first target image area and the boundary of the preview image, so that the first target image can be tracked in real time. This solves the problems of unclear shooting and difficult tracking when shooting a moving target object and improves the user experience.
EXAMPLE III
An embodiment of the present invention provides an information processing method, as shown in fig. 7, the method including:
in step 301, the information processing apparatus acquires a preview image.
Specifically, the information processing apparatus may be a device having a shooting function. For example, a mobile phone, a camera, a video camera, etc., having a photographing function. When the shooting device is in an open state, the shooting device can use the image in the current view range as a preview image.
Step 302, if no operation instruction of the user is received within a preset time, the information processing apparatus determines that all the moving images in the preview image are second target images, and acquires position information of the second target images.
Specifically, when the shooting starts, the information processing apparatus detects whether an operation instruction of the user is received within a preset time, where the preset time may be 5s or 10s, and may also be set according to a habit of the user, which is not limited in the embodiment of the present invention.
Specifically, if no operation instruction of the user is received within the preset time, the information processing apparatus identifies all moving images in the preview image as the second target image using a multiple moving object detection algorithm, and acquires position information of the second target image. The multi-moving-object detection algorithm comprises an optical flow method, an interframe difference method, background subtraction, a Kalman filtering estimation algorithm and the like.
The second target image comprises N target images, wherein N is a natural number which is greater than or equal to 1.
Illustratively, as shown in fig. 8, the information processing apparatus determines all of the moving images A1, A2, A3, A4 and A5 in the preview image as the second target image.
Step 303, the information processing device determines N target image areas according to the position information of the N target images.
Specifically, the information processing apparatus may determine the position information of each of the N target images according to a moving object detection algorithm, thereby determining the target image area according to the position information of the target image. Each target image area in the N target image areas is the minimum circumscribed rectangle of each target image.
Illustratively, as shown in fig. 8, the circumscribed rectangular region of the target image A1 in the preview image is the target image region of A1, denoted as region A1; the circumscribed rectangular region of the target image A2 is the target image region of A2, denoted as region A2; the circumscribed rectangular region of the target image A3 is the target image region of A3, denoted as region A3; the circumscribed rectangular region of the target image A4 is the target image region of A4, denoted as region A4; and the circumscribed rectangular region of the target image A5 is the target image region of A5, denoted as region A5.
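For illustration, the sketch below detects all moving regions in a frame with OpenCV's built-in MOG2 background subtractor, one of the background-subtraction methods listed above; the class parameters, thresholds and minimum area used here are assumptions, not requirements of the embodiment.

    import cv2

    back_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    def detect_target_image_areas(frame, min_area=100):
        """Return the upright bounding rectangles (x, y, w, h) of all moving regions
        in `frame`; together they correspond to the N target image areas."""
        mask = back_sub.apply(frame)                                # foreground mask
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]                  # ignore tiny spurious blobs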
Step 304, the information processing device acquires N areas of the N target image areas, and determines a third target image area and a fourth target image area.
Wherein the third target image region is a target image region of a largest area among the N areas, and the fourth target image region includes: a target image area other than the third target image area.
Illustratively, as shown in fig. 8, the areas of the region A1, the region A2, the region A3, the region A4 and the region A5 are SA1, SA2, SA3, SA4 and SA5 respectively. The region with the largest area is the region A1, so the region A1 is the third target image region, and the fourth target image region consists of the regions other than the region A1, that is, the region A2, the region A3, the region A4 and the region A5.
Step 305, the information processing apparatus determines that the ratio of the area of each target image region in the fourth target image region to the maximum area is a first value, and determines that the distance between the center position of each target image region in the fourth target image region and the center position of the third target image region is a second value.
Illustratively, as shown in fig. 8, the area ratio of the region A2 to the region A1 is S1, the area ratio of the region A3 to the region A1 is S2, the area ratio of the region A4 to the region A1 is S3, and the area ratio of the region A5 to the region A1 is S4; the distance between the center position of the region A2 and the center position of the region A1 is L1, the distance between the center position of the region A3 and the center position of the region A1 is L2, the distance between the center position of the region A4 and the center position of the region A1 is L3, and the distance between the center position of the region A5 and the center position of the region A1 is L4.
Step 306, the information processing apparatus determines that the target image area where the first value is greater than or equal to the first preset value and the second value is less than or equal to the second preset value is a fifth target image area.
Illustratively, the first preset value is set as S and the second preset value is set as L. As shown in fig. 8, the area ratios of the region A2, the region A3, the region A4 and the region A5 to the region A1 are S1, S2, S3 and S4 in sequence; S1, S2 and S3 are all greater than the first preset value S, while S4 is smaller than the first preset value S. The distances between the center positions of the region A2, the region A3, the region A4 and the region A5 and the center position of the region A1 are L1, L2, L3 and L4 in sequence; L1 and L2 are smaller than the second preset value L, while L3 and L4 are larger than the second preset value L. It can therefore be determined that the fifth target image region includes the region A1, the region A2 and the region A3.
Step 307, the information processing apparatus determines that the minimum bounding rectangle of the fifth target image area is the second target image area.
Illustratively, as shown in fig. 8, the second target image region is the minimum bounding rectangle containing the region A1, the region A2 and the region A3.
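Steps 304 to 307 can be condensed into the following sketch; the parameter names s_min and l_max stand for the first and second preset values, and all names are chosen here for illustration only.

    def merge_target_image_areas(rects, s_min, l_max):
        """Merge the N target image areas rects = [(x, y, w, h), ...] into the second
        target image area: the minimum circumscribed rectangle of the largest region
        plus every region whose area ratio to it is >= s_min and whose centre lies
        within l_max of its centre."""
        def area(r):
            return r[2] * r[3]

        def centre(r):
            return (r[0] + r[2] / 2.0, r[1] + r[3] / 2.0)

        largest = max(rects, key=area)                 # third target image area
        cx, cy = centre(largest)

        kept = [largest]                               # will become the fifth target image area
        for r in rects:
            if r is largest:
                continue
            ratio = area(r) / area(largest)            # first numerical value
            dx, dy = centre(r)[0] - cx, centre(r)[1] - cy
            dist = (dx * dx + dy * dy) ** 0.5          # second numerical value
            if ratio >= s_min and dist <= l_max:
                kept.append(r)

        # Minimum circumscribed rectangle of the kept regions = second target image area.
        x0 = min(r[0] for r in kept)
        y0 = min(r[1] for r in kept)
        x1 = max(r[0] + r[2] for r in kept)
        y1 = max(r[1] + r[3] for r in kept)
        return (x0, y0, x1 - x0, y1 - y0)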
Step 308, the information processing device obtains second parameter information of the second target image area.
Wherein the second parameter information includes coordinate information of the second target image area.
Step 309, the information processing apparatus obtains a value of a first boundary and a value of a second boundary of the preview image, determines a value of a third boundary and a value of a fourth boundary of the second target image area according to the second parameter information of the second target image area, where the ratio of the value of the first boundary to the value of the third boundary is a third value and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value, and determines the minimum value of the third value and the fourth value as the adjustment parameter.
The preview image is rectangular, the first boundary is perpendicular to the second boundary, the second target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary.
Illustratively, as shown in fig. 8, a rectangle composed of the boundary A, the boundary B, the boundary C and the boundary D is the preview image area, and a rectangle composed of the boundary a, the boundary b, the boundary c and the boundary d is the second target image area. The boundary a and the boundary c are parallel to the boundary A, so the boundary a or the boundary c is the third boundary; the boundary b and the boundary d are parallel to the boundary B, so the boundary b or the boundary d is the fourth boundary. The opposite sides of a rectangle are parallel and equal, so the values of the boundary a and the boundary c are equal, and the values of the boundary b and the boundary d are equal. The ratio of the boundary A to the boundary a is r1, the ratio of the boundary B to the boundary b is r2, and the adjustment parameter r is the minimum of r1 and r2, expressed as r = min{r1, r2}.
And step 310, the information processing device enlarges and displays the second target image area according to the adjusting parameter.
Specifically, the information processing apparatus displays the second target image region in an enlarged manner in the preview image according to the adjustment parameter, and the positions in the preview image that are not covered by the enlarged second target image region are filled with a blurred image or a shadow image.
Illustratively, as shown in fig. 9, the inner rectangular region is the enlarged second target image region, and the region not filled with the enlarged second target image region is filled with a blurred image, i.e., the shaded regions on both sides of the enlarged second target image region in fig. 9.
Step 311, when the distance between the fifth boundary of the second target image area and the sixth boundary of the preview image is less than or equal to a third preset value, the information processing apparatus prompts a moving direction, and the moving direction indicates to move to the sixth boundary.
The fifth boundary is any one of the boundaries of the second target image area, the sixth boundary is parallel to the fifth boundary, and the distance between the sixth boundary and the fifth boundary is the minimum.
Optionally, after the information processing apparatus prompts the moving direction, if the user does not move in that direction, the second target image area will move out of the preview image area as the second target image moves, and the display then returns to the preview image.
Here, the description of step 311 may refer to the description of step 207, which is not repeated herein in this embodiment of the present invention.
According to the information processing method provided by the embodiment of the invention, the adjusting parameter can be determined according to the proportional relation between the boundary parameters of the preview image and the second target image, and the second target image is amplified and displayed through the adjusting parameter, so that the second target image can be clearly shot, and the moving direction of a user is prompted through the judgment of the distance between the boundary of the second target image area and the boundary of the preview image, therefore, the real-time tracking of the second target image can be realized, the problems of unclear shooting and difficult tracking when a moving target object is shot are solved, and the user experience is improved.
Example four
An embodiment of the present invention provides an information processing apparatus. As shown in fig. 10, the information processing apparatus 1 includes: an acquisition module 10, a determination module 11, a processing module 12 and a prompt module 13; wherein:
the acquiring module 10 is configured to acquire position information of a target image in a preview image;
the determining module 11 is configured to determine a target image area and parameter information of the target image area according to the position information of the target image, and is further configured to determine an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image;
the processing module 12 is configured to adjust the target image area according to the adjustment parameter;
the prompting module 13 is configured to prompt a moving direction when a distance between the boundary of the target image area and the boundary of the preview image satisfies a preset condition, so that the target image area is in the preview image.
Further, the obtaining module 10 is specifically configured to receive an operation instruction of a user within a preset time, and obtain a first target image in the preview image and position information of the first target image;
further, the determining module 11 is specifically configured to determine that all the moving images in the preview image are the second target images if the operation instruction of the user is not received within the preset time,
further, the obtaining module 10 is further configured to obtain position information of the second target image, where the second target image includes N target images, and N is a natural number greater than or equal to 1.
Further, the determining module 11 is further configured to determine the first target image area according to the position information of the first target image, where the first target image area is a minimum circumscribed rectangle of the first target image;
further, the obtaining module 10 is further configured to obtain first parameter information of the first target image area, where the first parameter information includes: coordinate information of the first target image area.
Further, the determining module 11 is configured to determine N target image areas according to position information of the N target images, where each target image area in the N target image areas is the minimum circumscribed rectangle of each target image, and is further configured to determine a second target image area according to the N target image areas.
Further, the obtaining module 10 is configured to obtain second parameter information of the second target image area, where the second parameter information includes coordinate information of the second target image area.
Further, the obtaining module 10 is configured to obtain N areas of the N target image regions.
Further, the determining module 11 is configured to determine a third target image region and a fourth target image region, where the third target image region is the target image region with the largest area among the N areas, and the fourth target image region includes the target image areas other than the third target image area; the determining module 11 is further configured to determine that the ratio of the area of each target image region in the fourth target image region to the maximum area is a first numerical value, and to determine that the distance between the center position of each target image region in the fourth target image region and the center position of the third target image region is a second numerical value; is further configured to determine that a target image area for which the first numerical value is greater than or equal to a first preset value and the second numerical value is less than or equal to a second preset value is a fifth target image area; and is further configured to determine the minimum circumscribed rectangle of the fifth target image area as the second target image area.
Further, the obtaining module 10 is configured to obtain a value of a first boundary and a value of a second boundary of the preview image, where the preview image is rectangular, the first boundary is perpendicular to the second boundary, and the value of the first boundary is greater than or equal to the value of the second boundary.
Further, the determining module 11 is configured to determine a value of a third boundary and a value of a fourth boundary of the target image area according to the parameter information of the target image area, where the target image area is a rectangle, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary; the ratio of the value of the first boundary to the value of the third boundary is a third value, and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value; and is further configured to determine a smallest value of the third and fourth values as the adjustment parameter.
Further, the processing module 12 is configured to enlarge and display the target image area according to the adjustment parameter.
Further, the prompting module 13 is configured to prompt a moving direction when a distance between a fifth boundary of the target image area and a sixth boundary of the preview image is less than or equal to a third preset value, where the moving direction indicates to move to the sixth boundary, the fifth boundary is any one boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and a distance between the sixth boundary and the fifth boundary is minimum.
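A possible reading of this prompting rule is sketched below; it again relies on the hypothetical Rect helper, and the margin standing in for the third preset value, as well as the prompt strings, are illustrative assumptions.

```python
# Illustrative sketch: when any boundary of the target image area comes within
# `margin` pixels of the parallel preview boundary, suggest moving toward that
# boundary; otherwise no prompt is needed.
from typing import Optional

def move_direction(target: Rect, preview_w: int, preview_h: int,
                   margin: int = 40) -> Optional[str]:
    gaps = {
        "move left":  target.left,                 # gap to the left preview boundary
        "move right": preview_w - target.right,    # gap to the right preview boundary
        "move up":    target.top,                  # gap to the top preview boundary
        "move down":  preview_h - target.bottom,   # gap to the bottom preview boundary
    }
    direction, gap = min(gaps.items(), key=lambda kv: kv[1])
    return direction if gap <= margin else None   # prompt only below the third preset value
```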
Further, the obtaining module 10 is further configured to obtain the preview image.
Specifically, for understanding of the information processing apparatus provided in the embodiment of the present invention, reference may be made to the descriptions of the information processing methods in the first to third embodiments, and details of this embodiment are not repeated herein.
The information processing device provided by the embodiment of the invention determines the adjustment parameter according to the proportional relation between the boundary parameters of the preview image and of the target image area, and magnifies and displays the target image area according to that parameter, so that the target image can be shot clearly. By judging the distance between the boundary of the target image area and the boundary of the preview image, the device prompts the user with a moving direction, so that the target image can be tracked in real time. This solves the problems of unclear shooting and difficult tracking when a moving target object is shot, and improves the user experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (12)

1. An information processing method, characterized in that the method comprises:
acquiring a target image in a preview image by using a moving target tracking algorithm, and determining contour information of the target image according to the frame-to-frame difference of each frame image of the target image;
determining a target image area and parameter information of the target image area according to the contour information of the target image;
determining an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image;
adjusting the target image area according to the adjusting parameter;
when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition, prompting a moving direction to enable the target image area to be in the preview image;
wherein the acquiring a target image in the preview image by using the moving target tracking algorithm and determining the contour information of the target image according to the frame-to-frame difference of each frame image of the target image comprises the following steps:
if an operation instruction of a user is received within preset time, acquiring a first target image in the preview image and contour information of the first target image;
if an operation instruction of a user is not received within a preset time, determining that all moving images in the preview image constitute a second target image, and acquiring contour information of the second target image, wherein the second target image comprises N target images, and N is a natural number greater than or equal to 1;
determining an adjustment parameter according to the parameter information of the target image area and the boundary parameter of the preview image, wherein the determining comprises the following steps:
acquiring a value of a first boundary and a value of a second boundary of the preview image, wherein the preview image is rectangular, and the first boundary is perpendicular to the second boundary;
determining a value of a third boundary and a value of a fourth boundary of the target image area according to the parameter information of the target image area, wherein the target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary;
the ratio of the value of the first boundary to the value of the third boundary is a third value, and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value;
determining the smaller of the third value and the fourth value as the adjusting parameter;
the adjusting the target image area according to the adjustment parameter includes:
and amplifying and displaying the target image area according to the adjusting parameter.
2. The method of claim 1, wherein determining a target image area and parameter information of the target image area according to the contour information of the target image comprises:
determining the first target image area according to the contour information of the first target image, wherein the first target image area is the minimum circumscribed rectangle of the first target image;
acquiring first parameter information of the first target image area, wherein the first parameter information comprises: coordinate information of the first target image area.
3. The method of claim 1, wherein determining a target image area and parameter information of the target image area according to the contour information of the target image comprises:
determining N target image areas according to the contour information of the N target images, wherein each target image area in the N target image areas is the minimum circumscribed rectangle of the corresponding target image;
determining a second target image area according to the N target image areas;
and acquiring second parameter information of the second target image area, wherein the second parameter information comprises coordinate information of the second target image area.
4. The method of claim 3, wherein determining a second target image region from the N target image regions comprises:
acquiring N areas of the N target image areas, and determining a third target image area and a fourth target image area, wherein the third target image area is a target image area with the largest area in the N areas, and the fourth target image area comprises: a target image area other than the third target image area;
determining that the ratio of the area of each target image region in the fourth target image region to the maximum area is a first numerical value, and determining that the distance between the center position of each target image region in the fourth target image region and the center position of the third target image region is a second numerical value;
determining, as a fifth target image area, the third target image area together with each target image area in the fourth target image area for which the first numerical value is greater than or equal to a first preset value and the second numerical value is less than or equal to a second preset value;
and determining the minimum circumscribed rectangle of the fifth target image area as a second target image area.
5. The method according to claim 1, wherein when the distance between the boundary of the target image area and the boundary of the preview image satisfies a preset condition, prompting a moving direction comprises:
and when the distance between a fifth boundary of the target image area and a sixth boundary of the preview image is smaller than or equal to a third preset value, prompting a moving direction, wherein the moving direction indicates to move to the sixth boundary, the fifth boundary is any one boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and the distance between the sixth boundary and the fifth boundary is minimum.
6. The method of claim 1, further comprising, before the obtaining the contour information of the target image in the preview image:
and acquiring the preview image.
7. An information processing apparatus, characterized in that the apparatus comprises: an acquisition module, a determining module, a processing module and a prompting module; wherein,
the acquisition module is used for acquiring a target image in a preview image by using a moving target tracking algorithm and determining contour information of the target image according to the frame-to-frame difference of each frame image of the target image;
the determining module is used for determining a target image area and parameter information of the target image area according to the contour information of the target image, and is further used for determining an adjusting parameter according to the parameter information of the target image area and the boundary parameter of the preview image;
the processing module is used for adjusting the target image area according to the adjusting parameter;
the prompting module is used for prompting a moving direction when the distance between the boundary of the target image area and the boundary of the preview image meets a preset condition so as to enable the target image area to be in the preview image;
the acquisition module is used for acquiring, if an operation instruction of a user is received within a preset time, a first target image in the preview image and contour information of the first target image;
the determining module is used for determining, if the operation instruction of the user is not received within the preset time, that all moving images in the preview image constitute the second target image;
the acquisition module is further configured to acquire contour information of the second target image, where the second target image includes N target images, and N is a natural number greater than or equal to 1;
the acquisition module is configured to acquire a value of a first boundary and a value of a second boundary of the preview image, where the preview image is rectangular, the first boundary is perpendicular to the second boundary, and the value of the first boundary is greater than or equal to the value of the second boundary;
the determining module is configured to determine a value of a third boundary and a value of a fourth boundary of the target image area according to the parameter information of the target image area, wherein the target image area is rectangular, the third boundary is parallel to the first boundary, and the fourth boundary is parallel to the second boundary; the ratio of the value of the first boundary to the value of the third boundary is a third value, and the ratio of the value of the second boundary to the value of the fourth boundary is a fourth value; and is further configured to determine the smaller of the third value and the fourth value as the adjusting parameter;
and the processing module is used for amplifying and displaying the target image area according to the adjusting parameter.
8. The apparatus of claim 7,
the determining module is configured to determine the first target image area according to the contour information of the first target image, where the first target image area is a minimum circumscribed rectangle of the first target image;
the acquiring module is configured to acquire first parameter information of the first target image area, where the first parameter information includes: coordinate information of the first target image area.
9. The apparatus of claim 7,
the determining module is used for determining N target image areas according to the contour information of the N target images, wherein each target image area in the N target image areas is the minimum circumscribed rectangle of the corresponding target image; and is further used for determining a second target image area according to the N target image areas;
the obtaining module is configured to obtain second parameter information of the second target image area, where the second parameter information includes coordinate information of the second target image area.
10. The apparatus of claim 9,
the acquisition module is used for acquiring N areas of the N target image areas;
the determining module is configured to determine a third target image area and a fourth target image area, wherein the third target image area is the target image area with the largest area among the N areas, and the fourth target image area comprises the target image areas other than the third target image area; is further configured to determine that the ratio of the area of each target image area in the fourth target image area to the maximum area is a first value, and that the distance between the center position of each target image area in the fourth target image area and the center position of the third target image area is a second value; is further configured to determine, as a fifth target image area, the third target image area together with each target image area in the fourth target image area for which the first value is greater than or equal to a first preset value and the second value is less than or equal to a second preset value; and is further configured to determine the minimum circumscribed rectangle of the fifth target image area as the second target image area.
11. The apparatus of claim 7, wherein the prompting module is configured to prompt a moving direction when a distance between a fifth boundary of the target image area and a sixth boundary of the preview image is smaller than or equal to a third preset value, the moving direction indicates moving to the sixth boundary, the fifth boundary is any boundary of the target image area, the sixth boundary is parallel to the fifth boundary, and a distance between the sixth boundary and the fifth boundary is minimum.
12. The apparatus of claim 7, wherein the obtaining module is further configured to obtain the preview image.
CN201610578837.3A 2016-07-21 2016-07-21 Information processing method and device Active CN107645628B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610578837.3A CN107645628B (en) 2016-07-21 2016-07-21 Information processing method and device
PCT/CN2017/000084 WO2018014517A1 (en) 2016-07-21 2017-01-03 Information processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610578837.3A CN107645628B (en) 2016-07-21 2016-07-21 Information processing method and device

Publications (2)

Publication Number Publication Date
CN107645628A CN107645628A (en) 2018-01-30
CN107645628B true CN107645628B (en) 2021-08-06

Family

ID=60991766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610578837.3A Active CN107645628B (en) 2016-07-21 2016-07-21 Information processing method and device

Country Status (2)

Country Link
CN (1) CN107645628B (en)
WO (1) WO2018014517A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111273837A (en) * 2018-12-04 2020-06-12 浙江宇视科技有限公司 Image processing method and device
CN110213611A (en) * 2019-06-25 2019-09-06 宫珉 A kind of ball competition field camera shooting implementation method based on artificial intelligence Visual identification technology
CN111163261B (en) * 2019-12-25 2022-03-01 上海肇观电子科技有限公司 Target detection method, circuit, visual impairment assistance device, electronic device, and medium
CN113554725A (en) * 2020-04-24 2021-10-26 西安诺瓦星云科技股份有限公司 Multi-pattern moving adsorption method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965645B2 (en) * 2001-09-25 2005-11-15 Microsoft Corporation Content-based characterization of video frame sequences
JP5159515B2 (en) * 2008-08-26 2013-03-06 キヤノン株式会社 Image processing apparatus and control method thereof
CN102014248B (en) * 2009-09-04 2012-07-04 华晶科技股份有限公司 Auto-focusing method and module and image pick-up device employing the method
US9113130B2 (en) * 2012-02-06 2015-08-18 Legend3D, Inc. Multi-stage production pipeline system
JP2013153429A (en) * 2011-12-27 2013-08-08 Canon Inc Image processing apparatus, image display system, image processing method and image processing program
CN103780747B (en) * 2012-10-23 2017-05-24 联想(北京)有限公司 Information processing method and electronic equipment
CN103813075A (en) * 2012-11-07 2014-05-21 联想(北京)有限公司 Reminding method and electronic device
CN103019537B (en) * 2012-11-19 2016-04-13 广东欧珀移动通信有限公司 A kind of image preview method and device
CN103139480A (en) * 2013-02-28 2013-06-05 华为终端有限公司 Image acquisition method and image acquisition device
CN104469121B (en) * 2013-09-16 2018-08-10 联想(北京)有限公司 Information processing method and electronic equipment
KR102119659B1 (en) * 2013-09-23 2020-06-08 엘지전자 주식회사 Display device and control method thereof
CN104008129B (en) * 2014-04-25 2017-03-15 小米科技有限责任公司 Position information processing method, device and terminal
CN105635569B (en) * 2015-12-26 2019-03-22 宇龙计算机通信科技(深圳)有限公司 Image pickup method, device and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A survey of video-signal-based target tracking technology; Wang Qin et al.; Chinese Journal of Stereology and Image Analysis (《中国体视学与图像分析》); 2016-03-25; full text *

Also Published As

Publication number Publication date
WO2018014517A1 (en) 2018-01-25
CN107645628A (en) 2018-01-30

Similar Documents

Publication Publication Date Title
US8988529B2 (en) Target tracking apparatus, image tracking apparatus, methods of controlling operation of same, and digital camera
US9300947B2 (en) Producing 3D images from captured 2D video
US10762653B2 (en) Generation apparatus of virtual viewpoint image, generation method, and storage medium
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
US20190089910A1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN107645628B (en) Information processing method and device
CN111557016A (en) Motion blur simulation
US20080199043A1 (en) Image Enhancement in Sports Recordings
US20230040548A1 (en) Panorama video editing method,apparatus,device and storage medium
US9667887B2 (en) Lens distortion method for broadcast video
JP2015180062A (en) Method for processing video sequence and device for processing video sequence
US11770497B2 (en) Method and device for processing video, and storage medium
US9215368B2 (en) Virtual decals for precision alignment and stabilization of motion graphics on mobile video
US20050018066A1 (en) Method and apparatus for setting a marker on an object and tracking the position of the object
CN107231524A (en) Image pickup method and device, computer installation and computer-readable recording medium
CN102158648A (en) Image capturing device and image processing method
CN111787354B (en) Video generation method and device
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114520877A (en) Video recording method and device and electronic equipment
JP2020522943A (en) Slow motion video capture based on object tracking
CN105657262B (en) A kind of image processing method and device
CN107392850B (en) Image processing method and system
CN112565604A (en) Video recording method and device and electronic equipment
CN115589532A (en) Anti-shake processing method and device, electronic equipment and readable storage medium
CN106488128B (en) Automatic photographing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant