CN114125567B - Image processing method and related device - Google Patents


Info

Publication number
CN114125567B
CN114125567B (application CN202010879120.9A)
Authority
CN
China
Prior art keywords
image
displayed
display area
determining
target
Prior art date
Legal status
Active
Application number
CN202010879120.9A
Other languages
Chinese (zh)
Other versions
CN114125567A (en)
Inventor
刘锴
徐蓓
曹雄伟
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202010879120.9A
Publication of CN114125567A
Application granted
Publication of CN114125567B
Status: Active

Classifications

    • H04N21/4788 — Supplemental services communicating with other users, e.g. chatting (H04N: pictorial communication, e.g. television; H04N21/00: selective content distribution)
    • H04N21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N7/142 — Systems for two-way working between two video terminals: constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/144 — Camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye-to-eye contact

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an image processing method and a related device. The method includes: acquiring display area information of a target terminal and acquiring position information of M key points in an image to be displayed; determining a target display area in the image to be displayed according to the display area information and the position information; and sending the image in the target display area to the target terminal. Because the image is processed according to the display area information of the terminal to obtain a display area within the image to be displayed, and the image in that display area is displayed, the practicability of the terminal in image display is improved.

Description

Image processing method and related device
Technical Field
The present invention relates to the field of computers, and in particular, to an image processing method and related apparatus.
Background
Terminals on the current market take various forms: mobile phones, large screens, smart watches, smart pocket watches, and so on. Interconnection technology between terminals is gradually developing, and terminals of different forms may need to share media resources, for example for picture viewing, wallpaper synchronization, remote-controlled photographing, or video chat. If the screen shapes of the two terminals are similar, for example a mobile phone and a large screen (both rectangular), a consistent rendering effect can be achieved through appropriate scaling when resources are shared. If the screen forms differ greatly (for example, a rectangular mobile phone screen and the circular dial of a smart watch, or a rectangular mobile phone screen and the heart-shaped screen of a smart pocket watch), then, owing to the hardware difference, the crude proportional scaling usually adopted in existing schemes causes black borders to appear on the screen during imaging, and the practicability is low.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a related device, which process an image according to the display area information of a terminal to obtain a target display area within the image to be displayed and display the image in that area, thereby improving the practicability of image display on the terminal.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
acquiring display area information of a target terminal and acquiring position information of M key points in an image to be displayed;
determining a target display area in the image to be displayed according to the display area information and the position information;
and sending the image in the target display area to the target terminal.
In this example, the target display area of the image to be displayed is determined according to the display area information of the target terminal and the position information of the key points in the image to be displayed, and the image of the target display area is sent to the target terminal so that it can be displayed there. Compared with the prior-art approach of displaying the image by direct scaling, this improves the practicability of displaying the content of the image to be displayed.
With reference to the first aspect, in a possible implementation manner, determining a target display area in an image to be displayed according to display area information and position information includes:
determining first characteristic information of an image cropping window according to the image to be displayed and the display area information;
determining N first areas according to first characteristic information of an image cropping window, wherein the first areas comprise M key points;
obtaining a distance value between the center of each first region in the N first regions and the M key points to obtain a distance value set corresponding to each first region in the N first regions;
determining a second region according to the distance value set corresponding to each of the N first regions;
and determining the overlapping area of the second area and the image to be displayed as a target display area.
In this example, a distance value set is obtained from the distance values between the center of each first region and the key points, a second region is determined according to the distance value sets, and the overlapping region of the second region and the image to be displayed is determined as the target display area. Because the second region that best satisfies the rule can be selected from the plurality of first regions, the precision of the display area is improved; and because the overlapping region of the second region and the image to be displayed is used as the target display area, the accuracy of determining the target display area is further improved.
With reference to the first aspect, in a possible implementation manner, the determining N first areas according to the first feature information of the image cropping window includes:
acquiring a first straight line, wherein the first straight line is a symmetry axis of an image to be displayed;
and performing sliding-window scanning with the geometric center of the image cropping window along the first straight line according to a preset step length to obtain N first areas.
The N first areas are determined in a sliding window scanning mode, and the efficiency of acquiring the first areas can be improved.
With reference to the first aspect, in a possible implementation manner, the determining N first areas according to the first feature information of the image cropping window includes:
acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
obtaining distance values between the reference point and the M key points to obtain M first distance values;
acquiring a target key point, wherein the target key point is a key point corresponding to the maximum value in the M first distance values;
and determining N first areas according to the reference point, the target key point, the shape information and the first straight line.
By acquiring a reference point in the image to be displayed, determining the target key point according to the first distance values between the reference point and the key points, and determining the N first areas accordingly, the accuracy of acquiring the first areas can be improved.
With reference to the first aspect, in a possible implementation manner, determining a target display area in an image to be displayed according to display area information and position information includes:
scaling the image to be displayed according to the position information to obtain a first image, wherein the first image is internally tangent to a display frame graph corresponding to the display area information;
and determining the area where the first image is located as a target display area.
The image to be displayed is scaled, the first image is then determined, and the area where the first image is located is determined as the target display area, so that the whole image to be displayed can be shown and accuracy is improved.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
an acquiring unit, configured to acquire display area information of a target terminal and acquire position information of M key points in an image to be displayed;
the determining unit is used for determining a target display area in the image to be displayed according to the display area information and the position information;
and the sending unit is used for sending the image in the target display area to the target terminal.
With reference to the second aspect, in a possible implementation manner, the determining unit is configured to:
determining first characteristic information of an image cropping window according to the image to be displayed and the display area information;
determining N first areas according to first characteristic information of an image cropping window, wherein the first areas comprise M key points;
obtaining a distance value between the center of each first region in the N first regions and the M key points to obtain a distance value set corresponding to each first region in the N first regions;
determining a second region according to the distance value set corresponding to each of the N first regions;
and determining the overlapping area of the second area and the image to be displayed as a target display area.
With reference to the second aspect, in a possible implementation manner, the first feature information includes a geometric center, and in determining the N first areas according to the first feature information of the image cropping window, the determining unit is configured to:
acquiring a first straight line, wherein the first straight line is a symmetry axis of an image to be displayed;
and performing sliding-window scanning with the geometric center of the image cropping window along the first straight line according to a preset step length to obtain N first areas.
With reference to the second aspect, in a possible implementation manner, in an aspect in which the first feature information includes shape information and the N first areas are determined according to the first feature information of the image cropping window, the determining unit is configured to:
acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
obtaining distance values between the reference point and the M key points to obtain M first distance values;
acquiring a target key point, wherein the target key point is a key point corresponding to the maximum value of the M first distance values;
and determining N first areas according to the reference point, the target key point, the shape information and the first straight line.
With reference to the second aspect, in a possible implementation manner, the determining unit is configured to:
scaling the image to be displayed according to the position information to obtain a first image, wherein the first image is internally tangent to a display frame graph corresponding to the display area information;
and determining the area where the first image is located as a target display area.
In a third aspect, an embodiment of the present invention provides an image processing apparatus, including:
a memory to store instructions; and
at least one processor coupled to the memory;
wherein the instructions, when executed by the at least one processor, cause the processor to perform all or part of the method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing a computer program, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform all or part of the method as set forth in the first aspect.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2B is a schematic diagram of an image cropping window according to an embodiment of the present application;
FIG. 2C is a schematic diagram of a first region according to an embodiment of the present application;
FIG. 2D is a schematic diagram of a sliding window scan according to an embodiment of the present disclosure;
FIG. 2E is a schematic diagram of another sliding window scan provided in an embodiment of the present application;
FIG. 2F is a schematic diagram of another embodiment of the present application for determining a first region;
FIG. 2G is a schematic diagram of another image processing method provided in the embodiments of the present application;
FIG. 2H is a schematic diagram illustrating another exemplary image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present application are described below with reference to the drawings.
The following first describes a scenario in which the image processing method is applied. As shown in fig. 1, a first user takes a self-portrait with a mobile phone (rectangular screen) and transmits the photo to a second user, who views it on a circular smart watch. If the rectangular photo is displayed directly on the circular smart watch, then in order to preserve the completeness of the photo information, the existing scheme usually inscribes the rectangular photo in the circular display area of the smart watch. This preserves the information, but leaves large undisplayed (black) regions in the circular display area, which degrades the display effect when viewing the image.
To address this poor display effect, embodiments of the present application determine a target display area within the image to be displayed from the display area information of the target terminal and the position information of the key points of the image, and display the image in that area. Because the target display area contains the key-point information, the information of the image to be displayed is preserved to the greatest extent while the display effect is improved.
Referring to fig. 2A, fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 2A, the image processing method may be applied to an image acquisition terminal, which may be understood as a terminal that captures the image to be displayed, such as a mobile phone with a rectangular screen. The image processing method includes:
s201, obtaining display area information of a target terminal, and obtaining position information of M key points in an image to be displayed.
The target terminal may be understood as a terminal that needs to perform image display, for example, a smart watch in the foregoing embodiment. The display area information may include the shape and size of the screen of the target terminal, and the like. The shape of the screen of the target terminal may be circular, elliptical, heart-shaped, etc., and is only exemplified here.
The display area information of the target terminal may be acquired through interaction with the target terminal. For example, the target terminal may send the display area information to the image acquisition terminal directly; alternatively, after receiving identification information sent by the target terminal (such as the product model or manufacturer of the target terminal), the image acquisition terminal may obtain the display area information by querying with that identification information.
The M key points in the image to be displayed may be understood as points where the key information in the image to be displayed is located. For example, if the key information is a face image of a user, the key point is the center point of the face image; if the key information is a fruit (an apple), the key point is the center point of the apple. Of course, a key point may also be a key area, i.e. the area where the key information is located, for example the area occupied by the user's face image or by the apple. These examples are illustrative only and are not limiting.
The position information of the acquired key points may be acquired by image processing techniques, such as image positioning techniques, feature extraction, and the like. Of course, the position information of the key point may also be obtained in other manners, for example, the position information of the key point may be obtained by a manner of actively marking by a user, and the like.
S202, determining a target display area in the image to be displayed according to the display area information and the position information.
Specifically, feature information of an image cropping window may be determined according to the display area information and the image to be displayed, a plurality of candidate display areas may be obtained by cropping with the image cropping window in a sliding-window manner, and the candidate display areas may then be screened to obtain the target display area.
Each candidate display area obtained by sliding-window cropping contains the M key points; if the key points are areas, the candidate display area contains the M key areas.
The candidate display area for which the sum of the distances between the M key points and the candidate's center is minimal may be determined as the target display area. Alternatively, the candidate display area whose center is closest to the center of the image to be displayed may be chosen. Of course, the target display area can also be determined from the candidate display areas by other selection criteria.
And S203, sending the image in the target display area to the target terminal.
The image in the target display area may be sent to the target terminal wirelessly, by scan-to-share, or over a wired connection.
Of course, the feature information of the target display area may also be sent to the target terminal, and the feature information may be a central point of the target display area, or the like.
In this example, the target display area of the image to be displayed is determined according to the display area information of the target terminal and the position information of the key points in the image to be displayed, and the image of the target display area is sent to the target terminal so that it can be displayed there. Compared with the prior-art approach of direct scaling, this improves the practicability of displaying the content of the image to be displayed.
In one possible implementation, a possible method for determining a target display area according to display area information and position information of M key points includes:
a1, determining first characteristic information of an image cropping window according to an image to be displayed and display area information;
a2, determining N first areas according to first characteristic information of an image cropping window, wherein the first areas comprise M key points;
a3, obtaining distance values between the center of each first region in the N first regions and the M key points to obtain a distance value set corresponding to each first region in the N first regions;
a4, determining a second area according to the distance value set corresponding to each of the N first areas;
and A5, determining the overlapping area of the second area and the image to be displayed as a target display area.
The first characteristic information includes a shape, a size, and the like. The shape of the image cropping window is the same as the shape of the display area: for example, if the display area is circular, the image cropping window is circular. The size of the image cropping window is determined according to the size of the image to be displayed; the window may be inscribed on the short side of the image to be displayed, or its size may be determined from a feature point and a reference point of the image to be displayed. As shown in FIG. 2B, the image cropping window borders the short side of the image to be displayed; the window is illustrated here as a circle, and a key area is used as the example.
According to the first feature information, the N first regions may be determined in a sliding-window manner. Specifically, the image cropping window performs sliding-window scanning from an initial position over the image to be displayed, with a given sliding direction and sliding step length, and every window position that contains all M key points is determined as a first region; there may be several such regions. This is shown in fig. 2C, again using a key area as the example.
The center of a first area may be understood as its geometric center, which lies on a preset sliding-window scan line; the sliding-window scan moves along this line.
The sum of the elements in each distance value set may be computed to obtain a distance value for that set, and the first region whose distance value set has the smallest sum may be determined as the second region.
The target display area may be the overlapping area of the second area and the image to be displayed. This may be understood as follows: if the second area is not entirely contained within the image to be displayed, the area to be displayed is the overlapping area, and the area outside the overlap does not need to be displayed.
In this example, a distance value set is obtained from the distance values between the center of each first region and the key points, a second region is determined according to the distance value sets, and the overlapping region of the second region and the image to be displayed is determined as the target display area. Because the second region that best satisfies the rule can be selected from the plurality of first regions, the precision of the display area is improved; and because the overlapping region of the second region and the image to be displayed is used as the target display area, the accuracy of determining the target display area is further improved.
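As a minimal sketch (in Python, with illustrative function names not taken from the patent, assuming circular windows given as center coordinates with a common radius), the candidate filtering and second-region selection of steps A2-A4 might look like:

```python
import math

def contains_all(center, radius, keypoints):
    """True if every key point lies inside the circular window."""
    cx, cy = center
    return all(math.hypot(cx - kx, cy - ky) <= radius for kx, ky in keypoints)

def pick_second_region(candidates, radius, keypoints):
    """Steps A2-A4: keep only candidate centers whose window contains all
    M key points, then pick the one whose distance value set (the distances
    from the window center to each key point) has the smallest sum."""
    valid = [c for c in candidates if contains_all(c, radius, keypoints)]
    def summed(c):
        return sum(math.hypot(c[0] - kx, c[1] - ky) for kx, ky in keypoints)
    return min(valid, key=summed) if valid else None
```

Returning `None` when no candidate covers all key points corresponds to the fallback case discussed later, where the cropping window must be enlarged.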
In a possible implementation manner, when the first feature information is a geometric center of the image cropping window, the N first areas may be obtained by:
b1, obtaining a first straight line, wherein the first straight line is a symmetry axis of an image to be displayed;
and B2, performing sliding-window scanning with the geometric center of the image cropping window along the first straight line according to a preset step length to obtain N first areas.
The preset step length is set by an empirical value or historical data, and the specific scanning mode may refer to the scanning mode shown in fig. 2D.
In a possible implementation manner, if no first area can be determined by sliding-window scanning, the size of the image cropping window may be increased and the sliding-window scan repeated, until the N first areas are determined. A specific scanning manner is shown in fig. 2E, where (1) shows R = a, (2) shows R = b, and (3) shows R = c, with a, b, and c being successively larger diameters of the image cropping window. The increment may be a fixed value set from an empirical value or historical data, or a variable value; for example, the difference between b and a may be smaller than the difference between c and b.
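Steps B1-B2 together with the window-enlarging fallback just described can be sketched as follows (Python; a portrait image with a vertical symmetry axis is assumed, `R` is treated as the window radius here for simplicity, and all names are illustrative, not from the patent):

```python
def axis_scan_centers(height, width, radius, step):
    """Steps B1-B2: slide the geometric center of a circular cropping window
    along the vertical symmetry axis x = width/2 with a preset step length,
    keeping the whole circle inside the image."""
    if 2 * radius > width or 2 * radius > height:
        return []
    x = width / 2.0
    centers, y = [], float(radius)
    while y <= height - radius:
        centers.append((x, y))
        y += step
    return centers

def scan_with_growing_window(height, width, radii, step, is_valid):
    """Fallback: if no position is valid at the current size (R = a, b, c, ...),
    enlarge the window and rescan until some first areas are found."""
    for r in radii:
        hits = [c for c in axis_scan_centers(height, width, r, step)
                if is_valid(c, r)]
        if hits:
            return r, hits
    return None, []
```

Here `is_valid` would be a predicate such as "the window at this center contains all M key points".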
In a possible implementation manner, when the first feature information includes shape information, the first area may be further obtained by the following method, which is specifically as follows:
c1, acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
c2, obtaining distance values between the reference point and the M key points to obtain M first distance values;
c3, obtaining a target key point, wherein the target key point is a key point corresponding to the maximum value in the M first distance values;
and C4, determining N first areas according to the reference point, the target key point, the shape information and the first straight line.
As shown in fig. 2F, the reference point may be a point a in the figure, and the first line is a L1 line in the figure.
The distance values between the reference point and the M key points can be obtained through a distance calculation method, and M first distance values are obtained.
The center point of the first area lies on the first straight line, and its position on the line may be determined by requiring that its distances to the target key point and to the reference point are equal. From this center point and the shape information, a first area can be determined. If there are several target key points, there may be several first areas.
In this example, the accuracy of acquiring the first region can be improved by acquiring the reference point in the image to be displayed, determining the target key point according to the first distance value between the reference point and the key point, and determining the N first regions according to the key point.
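Under the assumption of a vertical symmetry axis with the reference point at the top edge, steps C1-C4 admit a closed form: equating the center's distances to the reference point and to the target key point fixes the center's position on the axis. A sketch (Python; the function name is illustrative):

```python
import math

def circle_through_ref_and_target(axis_x, keypoints):
    """Steps C1-C4: the reference point is where the symmetry axis x = axis_x
    meets the top boundary, i.e. (axis_x, 0).  The target key point is the
    key point farthest from the reference; the circle's center lies on the
    axis, equidistant from the reference point and the target key point."""
    ref = (axis_x, 0.0)
    kx, ky = max(keypoints,
                 key=lambda p: math.hypot(p[0] - ref[0], p[1] - ref[1]))
    # Center (axis_x, t): t**2 = (axis_x - kx)**2 + (t - ky)**2, solved for t.
    t = ((axis_x - kx) ** 2 + ky ** 2) / (2.0 * ky)
    return (axis_x, t), t  # center and radius
```

The returned circle passes through both the reference point and the farthest key point, so all M key points (being no farther from the reference) tend to fall inside it.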
In a possible implementation manner, the target display area may be obtained by the following method:
d1, carrying out scaling processing on the image to be displayed according to the position information to obtain a first image, wherein the first image is inscribed in a display frame graph corresponding to the display area information;
and D2, determining the area where the first image is located as a target display area.
The image to be displayed is scaled to obtain the first image, which contains the M key points. During scaling, the positions of the key points are updated to obtain their positions in the first image.
That is, after scaling, the image to be displayed can be inscribed in the frame of the target terminal's display screen; for example, if the display frame is circular, the scaled image is inscribed in that circle.
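For a rectangular image and a circular display frame, the scale factor in step D1 follows from requiring the image diagonal to equal the circle's diameter. A minimal sketch (Python; names are illustrative):

```python
import math

def inscribe_in_circle(img_w, img_h, display_diameter):
    """Step D1: scale the rectangular image so that it is inscribed in the
    circular display frame, i.e. its diagonal equals the frame's diameter."""
    scale = display_diameter / math.hypot(img_w, img_h)
    return scale, (img_w * scale, img_h * scale)
```

The key-point positions are updated with the same factor, as described above.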
In a possible implementation manner, the image processing method may also be applied to video processing. For example, the image acquisition terminal may acquire a video, process each frame of the video through the image processing method so that the processed frames can be better displayed on the target terminal, combine the processed frames into a processed video, and send the processed video to the target terminal. Of course, the image acquisition terminal and the target terminal may also conduct a video call: after the image acquisition terminal captures the images or video stream, it processes the video stream and then sends the processed stream to the target terminal, so that images are better displayed during the video call.
A specific example is shown in fig. 2G and 2H. Taking a rectangular-screen mobile phone and a circular smart watch as an example, this describes synchronous rendering with real-time processing of camera frames when performing a video chat or a remote-control shooting preview through the circular smart watch. When the mobile phone shoots, its camera detects the shooting orientation of the user in real time; the orientation is either landscape or portrait. When portrait orientation is detected, the method shown in fig. 2G is executed; when landscape orientation is detected, the method shown in fig. 2H is executed. If the shooting orientation of the mobile phone is portrait, the mobile phone camera executes the portrait algorithm for detecting head positions in real time, specifically as follows:
(1) Check whether all detected heads lie within some longitudinal inscribed-circle sliding window;
(2) If so, the mobile phone camera sends a Flag=1 instruction, carrying the position of the circle center, to the watch camera, and switches to the full-screen circular imaging mode;
(3) If not, check whether all heads lie within the transverse inscribed circle;
(4) If so, the mobile phone camera sends a Flag=2 instruction to the watch camera and switches to the inscribed-rectangle imaging mode;
(5) If the check in step (3) also fails, the mobile phone camera sends a Flag=3 instruction to the watch camera and switches to the inscribed-rectangle imaging mode.
If the shooting orientation of the mobile phone is landscape, the mobile phone camera executes the landscape algorithm for detecting head positions in real time, specifically as follows:
(1) Check whether all detected heads lie within some transverse inscribed-circle sliding window;
(2) If so, the mobile phone camera sends a Flag=4 instruction, carrying the position of the circle center, to the watch camera, and switches to the full-screen circular imaging mode;
(3) If not, check whether all heads lie within the longitudinal inscribed circle;
(4) If so, the mobile phone camera sends a Flag=5 instruction to the watch camera and switches to the inscribed-rectangle imaging mode;
(5) If the check in step (3) also fails, the mobile phone camera sends a Flag=6 instruction to the watch camera and switches to the inscribed-rectangle imaging mode.
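The Flag selection logic of the two branches above can be sketched as follows. The two boolean inputs stand in for the real-time head-position checks (whether all heads fit a sliding inscribed-circle window along the primary axis, or the inscribed circle along the secondary axis); the detection itself is not shown, and the function name is a hypothetical one of mine.

```python
def select_imaging_mode(all_heads_in_sliding_circle,
                        all_heads_in_cross_circle,
                        portrait=True):
    """Return the Flag value described in the steps above:
    Flags 1-3 for portrait shooting, Flags 4-6 for landscape."""
    base = 1 if portrait else 4
    if all_heads_in_sliding_circle:
        return base        # full-screen circular imaging mode
    if all_heads_in_cross_circle:
        return base + 1    # inscribed-rectangle imaging mode
    return base + 2        # inscribed-rectangle imaging mode (fallback)
```

The watch camera only needs the Flag (plus the circle-center position for Flags 1 and 4) to render the frame in the matching mode.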
Referring to fig. 3, fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 3, the image processing apparatus 30 includes:
the acquiring unit 301 is configured to acquire display area information of a target terminal and acquire position information of M key points in an image to be displayed;
a determining unit 302, configured to determine a target display area in the image to be displayed according to the display area information and the position information;
a sending unit 303, configured to send the image in the target display area to the target terminal.
In a possible implementation, the determining unit 302 is configured to:
determining first characteristic information of an image cutting window according to the image to be displayed and the display area information;
determining N first areas according to first characteristic information of the image cropping window, wherein the first areas comprise M key points;
obtaining a distance value between the center of each of the N first regions and the M key points to obtain a distance value set corresponding to each of the N first regions;
determining a second region according to the distance value set corresponding to each of the N first regions;
and determining the overlapping area of the second area and the image to be displayed as a target display area.
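The selection of the second region among the N first regions can be sketched as follows, consistent with claim 2's criterion of minimizing the sum of center-to-key-point distances. As a simplification of mine, each candidate region is represented here only by its center point.

```python
import math

def choose_second_region(region_centers, keypoints):
    """For each of the N candidate first regions, sum the distances from
    its center to all M key points (the region's distance value set),
    then pick the candidate with the smallest sum as the second region."""
    sums = [sum(math.dist(c, kp) for kp in keypoints)
            for c in region_centers]
    best = sums.index(min(sums))
    return best, sums

# Example: three candidate centers along the symmetry axis, two key points.
centers = [(10.0, 10.0), (20.0, 20.0), (30.0, 30.0)]
kps = [(18.0, 18.0), (22.0, 22.0)]
idx, sums = choose_second_region(centers, kps)
```

Here the middle candidate sits closest to both key points, so it is chosen as the second region, and its overlap with the image to be displayed becomes the target display area.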
In a possible implementation manner, the first feature information includes a geometric center, and in determining the N first areas according to the first feature information of the image cropping window, the determining unit 302 is configured to:
acquiring a first straight line, wherein the first straight line is a symmetry axis of an image to be displayed;
and carrying out sliding window scanning on the geometric center of the image cutting window on a first straight line according to a preset step length to obtain N first areas.
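The sliding-window scan of the geometric center along the first straight line can be sketched as follows. This is a minimal sketch assuming a vertical symmetry axis (so only the y coordinate varies); the function name and endpoint representation are my own illustration, not the patent's.

```python
def slide_centers_along_axis(axis_start, axis_end, step):
    """Generate candidate geometric centers for the image cropping window
    by sliding along the symmetry axis at a preset step; each center
    defines one of the N first regions.

    axis_start and axis_end are (x, y) endpoints on the (vertical) axis.
    """
    x = axis_start[0]
    centers = []
    y = axis_start[1]
    while y <= axis_end[1]:
        centers.append((x, y))
        y += step
    return centers
```

For an axis from (50, 0) to (50, 100) with step 25, this yields N = 5 candidate centers, and hence 5 first regions to score against the key points.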
In a possible implementation manner, the first feature information includes shape information, and in terms of determining the N first areas according to the first feature information of the image cropping window, the determining unit 302 is configured to:
acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
obtaining distance values between the reference point and the M key points to obtain M first distance values;
acquiring a target key point, wherein the target key point is a key point corresponding to the maximum value of the M first distance values;
and determining N first areas according to the reference point, the target key point, the shape information and the first straight line.
In one possible implementation, the determining unit 302 is configured to:
scaling the image to be displayed according to the position information to obtain a first image, wherein the first image is inscribed in the display frame graph corresponding to the display area information;
and determining the area where the first image is located as a target display area.
In the present embodiment, the image processing apparatus 30 is presented in the form of units. A "unit" may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the described functionality. Further, the above acquiring unit 301 and determining unit 302 may be implemented by the processor 401 of the image processing apparatus shown in fig. 4.
The image processing apparatus 40 shown in fig. 4 may be implemented with the structure of fig. 4; the image processing apparatus 40 includes at least one processor 401, at least one memory 402, and at least one communication interface 403. The processor 401, the memory 402 and the communication interface 403 are connected through a communication bus and communicate with each other.
The processor 401 may be a general purpose Central Processing Unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of programs according to the above schemes.
The communication interface 403 is used for communicating with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 402 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or may be integrated with the processor.
The memory 402 is used for storing application program codes for executing the above scheme, and the processor 401 is used for controlling the execution. The processor 401 is configured to execute application code stored in the memory 402.
The code stored in the memory 402 can execute the image processing method provided above: acquiring display area information of the target terminal and position information of M key points in an image to be displayed; determining a target display area in the image to be displayed according to the display area information and the position information; and sending the image in the target display area to the target terminal.
The present application further provides a computer-readable storage medium, where the computer-readable storage medium may store a program which, when executed, performs some or all of the steps of any one of the image processing methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the present invention is not limited by the order of the described actions, since according to the present invention some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in this specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory, the memory including: a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in view of the above, the content of the present specification should not be construed as a limitation to the present invention.

Claims (6)

1. An image processing method, characterized in that the method comprises:
acquiring display area information of a target terminal and acquiring position information of M key points in an image to be displayed;
determining a target display area in the image to be displayed according to the display area information and the position information;
sending the image in the target display area to the target terminal;
the determining the target display area in the image to be displayed specifically includes:
determining first characteristic information of an image cropping window according to the image to be displayed and the display area information; the first feature information includes shape information;
acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
obtaining distance values between the reference point and the M key points to obtain M first distance values; m is a positive integer greater than or equal to 1;
acquiring a target key point, wherein the target key point is a key point corresponding to the maximum value in the M first distance values;
determining N first regions according to the reference point, the target key point, the shape information and the first straight line, wherein the first regions comprise the M key points; n is a positive integer greater than or equal to 1;
obtaining a distance value between the center of each of the N first regions and the M key points to obtain a distance value set corresponding to each of the N first regions;
determining a second region according to the set of distance values corresponding to each of the N first regions;
and determining the overlapping area of the second area and the image to be displayed as a target display area.
2. The method according to claim 1, wherein the determining the second region from the set of distance values corresponding to each of the N first regions comprises:
respectively calculating a first distance value of each distance value set to obtain N first distance values, wherein the first distance values are the sum of elements in a single distance value set;
and determining, among the N first distance values, the first region whose distance value set corresponds to the minimum first distance value as the second region.
3. An image processing apparatus, characterized in that the apparatus comprises:
the display device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring display area information of a target terminal and acquiring position information of M key points in an image to be displayed; m is a positive integer greater than or equal to 1;
the determining unit is used for determining a target display area in the image to be displayed according to the display area information and the position information;
the sending unit is used for sending the image in the target display area to the target terminal;
wherein the determination unit is configured to:
determining first characteristic information of an image cropping window according to the image to be displayed and the display area information; the first feature information includes shape information;
acquiring a reference point in the image to be displayed, wherein the reference point is an intersection point of a first straight line and the boundary of the image to be displayed, and the first straight line is a symmetry axis of the image to be displayed;
obtaining distance values between the reference point and the M key points to obtain M first distance values;
acquiring a target key point, wherein the target key point is a key point corresponding to the maximum value in the M first distance values;
determining N first regions according to the reference point, the target key point, the shape information and the first straight line, wherein the first regions comprise the M key points; n is a positive integer greater than or equal to 1;
obtaining a distance value between the center of each of the N first regions and the M key points to obtain a distance value set corresponding to each of the N first regions;
determining a second region according to the distance value set corresponding to each of the N first regions;
and determining the overlapping area of the second area and the image to be displayed as a target display area.
4. The apparatus according to claim 3, wherein the determining the second region according to the set of distance values corresponding to each of the N first regions specifically comprises:
respectively calculating first distance values of each distance value set to obtain N first distance values, wherein the first distance values are the sum of elements in a single distance value set;
and determining, among the N first distance values, the first region whose distance value set corresponds to the minimum first distance value as the second region.
5. An apparatus, comprising:
a memory to store instructions; and
at least one processor coupled to the memory;
wherein the instructions, when executed by the at least one processor, cause the processor to perform the method of any of claims 1-2.
6. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-2.
CN202010879120.9A 2020-08-27 2020-08-27 Image processing method and related device Active CN114125567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010879120.9A CN114125567B (en) 2020-08-27 2020-08-27 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010879120.9A CN114125567B (en) 2020-08-27 2020-08-27 Image processing method and related device

Publications (2)

Publication Number Publication Date
CN114125567A CN114125567A (en) 2022-03-01
CN114125567B true CN114125567B (en) 2022-12-13

Family

ID=80374582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010879120.9A Active CN114125567B (en) 2020-08-27 2020-08-27 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN114125567B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105594194A (en) * 2013-10-01 2016-05-18 奥林巴斯株式会社 Image display device and image display method
CN110223306A (en) * 2019-06-14 2019-09-10 北京奇艺世纪科技有限公司 A kind of method of cutting out and device of image
CN110611787A (en) * 2019-06-10 2019-12-24 青岛海信电器股份有限公司 Display and image processing method
CN111145093A (en) * 2019-12-20 2020-05-12 北京五八信息技术有限公司 Image display method, image display device, electronic device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004302785A (en) * 2003-03-31 2004-10-28 Honda Motor Co Ltd Image transmitting apparatus of mobile robot
US20040228505A1 (en) * 2003-04-14 2004-11-18 Fuji Photo Film Co., Ltd. Image characteristic portion extraction method, computer readable medium, and data collection and processing device


Also Published As

Publication number Publication date
CN114125567A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
EP3762899B1 (en) Object segmentation in a sequence of color image frames based on adaptive foreground mask upsampling
US8432455B2 (en) Method, apparatus and computer program product for automatically taking photos of oneself
EP3762898B1 (en) Object segmentation in a sequence of color image frames by background image and background depth correction
CN106599758A (en) Image quality processing method and terminal
CN112954450A (en) Video processing method and device, electronic equipment and storage medium
EP4191513A1 (en) Image processing method and apparatus, device and storage medium
CN112017137A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN104967786A (en) Image selection method and device
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
US20200090309A1 (en) Method and device for denoising processing, storage medium, and terminal
CN113379713B (en) Certificate image detection method and device
US10552974B2 (en) Association methods and association devices
CN112990197A (en) License plate recognition method and device, electronic equipment and storage medium
CN114125567B (en) Image processing method and related device
CN107733874B (en) Information processing method, information processing device, computer equipment and storage medium
EP3461138B1 (en) Processing method and terminal
US9092661B2 (en) Facial features detection
CN109842791B (en) Image processing method and device
CN110740256B (en) Doorbell camera cooperation method and related product
CN114155268A (en) Image processing method, image processing device, electronic equipment and storage medium
CN107256137B (en) Picture processing method and device
US20110063319A1 (en) Apparatus and method of filtering geographical data
CN114820638A (en) Image processing method, related device, equipment and computer readable storage medium
CN112422827B (en) Information processing method, device and equipment and storage medium
EP4358019A1 (en) Image processing method, electronic device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant