CN113592874A - Image display method and device and computer equipment

Image display method and device and computer equipment

Info

Publication number
CN113592874A
CN113592874A
Authority
CN
China
Prior art keywords
image, area, target, region, face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010364395.9A
Other languages
Chinese (zh)
Inventor
陈莹
何胜远
王梁
申川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010364395.9A
Priority to PCT/CN2021/089984 (published as WO2021218926A1)
Publication of CN113592874A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/30: Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32: Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37: Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image display method and device and computer equipment, belonging to the technical field of image processing. The method includes: acquiring an image and determining a face region in the image; determining a target region in the image according to the face region, the size of the target region being the same as that of a display interface; cutting out the target region to obtain a target image; and displaying the target image in the display interface. As long as the user is within the image acquisition range, the face of the user is contained in the displayed target image as far as possible, which enlarges the position range in which the face can be displayed, improves the flexibility of face display, and can greatly improve the user experience.

Description

Image display method and device and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image display method and apparatus, and a computer device.
Background
With the development of image processing technology, biometric access control systems based on artificial intelligence and deep learning have developed rapidly. Among them, face recognition is applied more and more widely in access control systems because it is fast, allows convenient passage, and provides a good user experience.
At present, because of limitations of the device's form factor, the preview interface on which a user views his or her face usually displays a fixed scene; that is, the image at a fixed position within the acquisition range of the camera is displayed on the preview interface. In this case, the user must stand at that fixed position to see his or her face on the preview interface, which confines the face display position to a small range and causes inconvenience to the user.
Disclosure of Invention
The application provides an image display method, an image display device, computer equipment and a storage medium, which can enlarge the position capable of realizing face display.
In one aspect, an image display method is provided, the method including:
acquiring an image and determining a face area in the image;
determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of a display interface;
cutting out a target area in the image to obtain a target image;
and displaying the target image in the display interface.
Optionally, the determining a face region in the image includes:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, the determining a target region in the image according to the face region in the image includes:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the determining a target region in the image according to the first region and the second region includes:
if the distance between the position of the first area and the position of the second area is smaller than or equal to a preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than a preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as a target area in the image.
Optionally, the method further comprises:
if the face area does not exist in the image, taking an area, with the same size as the display interface and the same center point as the center point of the image, in the image as a first area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
In one aspect, there is provided an image display apparatus, the apparatus including:
a processor for determining a face region in the acquired image; determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of a display interface; cutting out a target area in the image to obtain a target image;
and the display is used for displaying the target image in the display interface.
Optionally, the processor is configured to:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, the processor is configured to:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the processor is configured to:
if the distance between the position of the first area and the position of the second area is smaller than or equal to a preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than a preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as a target area in the image.
Optionally, the processor is further configured to:
if the face area does not exist in the image, taking an area, with the same size as the display interface and the same center point as the center point of the image, in the image as a first area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the apparatus further comprises:
and the image acquisition module is used for acquiring images.
In one aspect, there is provided an image display apparatus, the apparatus including:
the image acquisition module is used for acquiring an image;
the first determination module is used for determining a face area in the image;
the second determining module is used for determining a target area in the image according to the face area in the image, and the size of the target area is the same as that of the display interface;
the cutting module is used for cutting out a target area in the image to obtain a target image;
and the display module is used for displaying the target image in the display interface.
Optionally, the first determining module is configured to:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, the second determining module is configured to:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the second determining module is configured to:
if the distance between the position of the first area and the position of the second area is smaller than or equal to a preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than a preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as a target area in the image.
Optionally, the apparatus further comprises:
a third determining module, configured to, if a face region does not exist in the image, take a region in the image, where the size of the region is the same as that of the display interface, and a center point of the region is the same as that of the image, as a first region;
a fourth determining module, configured to use a target region in a previous frame of image acquired before the image is acquired as a second region;
and the fifth determining module is used for determining a target area in the image according to the first area and the second area.
In one aspect, a computer device is provided, including a processor and a memory, where the memory is configured to store a computer program and the processor is configured to execute the program stored in the memory to implement the steps of the image display method described above.
In one aspect, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the image display method described above.
In one aspect, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform the steps of the image display method described above.
The technical scheme provided by the application can at least bring the following beneficial effects:
after the image is acquired, the face area in the image is determined. And then, determining a target area in the image according to the face area in the image, and cutting out the target area in the image to obtain a target image. And finally, displaying the target image in the display interface. Therefore, as long as the user is in the image acquisition range, the target image displayed in the display interface can contain the face of the user as much as possible. Compared with the scheme that the user can only realize the face display by standing at a fixed position in the related art, the embodiment of the application enlarges the position range capable of realizing the face display, improves the flexibility of the face display and can greatly improve the user experience.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flowchart of an image display method provided in an embodiment of the present application;
FIG. 2 is a flow chart of another image display method provided in the embodiments of the present application;
fig. 3 is a schematic diagram of a face region and a first region provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a first region, a second region and a target region provided by an embodiment of the present application;
FIG. 5 is a flowchart of another image display method provided in the embodiments of the present application;
fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another image display device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of the present application, "/" indicates an OR relationship; for example, A/B may indicate A or B. "And/or" merely describes an association between associated objects and covers three relationships; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, for clarity in describing the technical solutions of the present application, the terms "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that these terms do not denote any order, quantity, or importance.
Fig. 1 is a flowchart of an image display method according to an embodiment of the present application. Referring to fig. 1, the method comprises the steps of:
step 101: and acquiring an image and determining a face region in the image.
Step 102: and determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of the display interface.
Step 103: and cutting out a target area in the image to obtain a target image.
Step 104: and displaying the target image in the display interface.
In the embodiment of the application, after the image is acquired, the face region in the image is determined. And then, determining a target area in the image according to the face area in the image, and cutting out the target area in the image to obtain a target image. And finally, displaying the target image in the display interface. Therefore, as long as the user is in the image acquisition range, the target image displayed in the display interface can contain the face of the user as much as possible. Compared with the scheme that the user can only realize the face display by standing at a fixed position in the related art, the embodiment of the application enlarges the position range capable of realizing the face display, improves the flexibility of the face display and can greatly improve the user experience.
Optionally, determining a face region in the image includes:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, determining a target region in the image according to the face region in the image includes:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, determining a target region in the image according to the first region and the second region includes:
if the distance between the position of the first area and the position of the second area is smaller than or equal to the preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as the target area in the image.
Optionally, the method further comprises:
if the face area does not exist in the image, taking an area, with the same size as the display interface and the same center point as the center point of the image, in the image as a first area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present application, and the present application embodiment is not described in detail again.
Fig. 2 is a flowchart of an image display method according to an embodiment of the present application. Referring to fig. 2, the method includes the following steps.
Step 201: an image is acquired and a target area in the image is determined.
During image acquisition, each time the camera captures a frame, that frame can be acquired as the image to be processed.
The camera can acquire images in real time and may be, for example, a camera used in an access control device. On the one hand, the images collected by the camera can be used for face recognition: the access control device opens the door when face recognition succeeds and keeps it closed when face recognition fails. On the other hand, the images collected by the camera can be previewed by the user, that is, displayed on a display screen of the access control device, specifically according to the image display method provided in the embodiments of the present application.
Of course, the image display method provided in the embodiment of the present application may be applied not only to an access control device, but also to other devices that require image display, such as a payment device.
The size of the target area is the same as the size of the display interface. That is, the width and height of the target area are consistent with the screen display resolution. In this way, the image of the target area can be directly displayed in the display interface.
It is noted that each time a frame of image is captured by the camera, the target area in the image can be determined, so as to subsequently implement image preview in the display interface accordingly. That is, the embodiment of the application can implement real-time preview of the image acquired by the camera in the image acquisition process.
When determining the target area in the image, if the face area exists in the image, determining the target area in the image according to a first possible manner as follows; if the face region does not exist in the image, the target region in the image can be determined in the following second possible manner.
A first possible way: determining a face region in the image; and determining a target area in the image according to the face area in the image.
Note that the face region in the image is a region in which a face exists in the image.
When the face region in the image is determined, one or more detection frames in the image can be determined, and a region indicated by one detection frame with the largest size in the one or more detection frames in the image is taken as the face region in the image.
Note that the detection frame contains a face, and the detection frame is used to indicate an area where the face exists. The area may be a generally rectangular area, and thus the dimensions of the detection box may generally include the width and height of the detection box.
In addition, the face contained in the detection frame with the largest size among the one or more detection frames in the image is most likely the face of the person currently using the camera, so the region indicated by that detection frame can be taken as the face region in the image. For example, the region indicated by the widest or tallest of the one or more detection frames in the image may be used as the face region.
There are various ways to determine one or more detection frames in the image. For example, the image may be input to a face detection model, and the face detection model outputs the position of each detection frame in the image, where the position of the detection frame may include the size of the detection frame and the coordinates of a position point (e.g., an upper left corner point, a lower left corner point, an upper right corner point, a lower right corner point, or a central point). Of course, one or more detection frames in the image may be determined in other ways, which is not limited in this application.
It should be noted that the face detection model may be a pre-trained model capable of determining a detection frame in which a face appearing in the image is located, for example, the face detection model may be CNN (Convolutional Neural Network), and the like.
In addition, if the coordinates of the position point included in the position of the detection frame are not the coordinates of the center point, the coordinates of the center point of the face region may be determined according to the size of the face region and the coordinates of the position point after the region indicated by the detection frame with the largest size in one or more detection frames in the image is taken as the face region in the image.
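For illustration only, the selection of the face region from the detection frames may be sketched in C as follows; the Rect type, the function names, and the box list are illustrative assumptions rather than identifiers from the application.

typedef struct {
    int x;    /* abscissa of the upper left corner point */
    int y;    /* ordinate of the upper left corner point */
    int w;    /* width of the detection frame */
    int h;    /* height of the detection frame */
} Rect;

/* Return the index of the detection frame with the largest size, here
 * measured by area; comparing by width or height alone, as mentioned
 * above, works the same way. Returns -1 when no detection frame exists. */
int largest_detection_frame(const Rect *boxes, int count)
{
    int best = -1;
    int best_area = 0;
    for (int i = 0; i < count; i++) {
        int area = boxes[i].w * boxes[i].h;
        if (area > best_area) {
            best_area = area;
            best = i;
        }
    }
    return best;
}

/* Derive the center point of a region from the coordinates of its upper
 * left corner point and its size, as described above. */
void region_center(Rect r, int *cx, int *cy)
{
    *cx = r.x + r.w / 2;
    *cy = r.y + r.h / 2;
}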
The operation of determining the target region in the image according to the face region in the image may be: and taking the area with the same size as the display interface and the same central point as the central point of the face area in the image as the target area in the image. Or, taking the area with the same size as the display interface and the same central point as the central point of the face area in the image as a first area; taking a target area in the last frame of image acquired before the image is acquired as a second area; and determining a target area in the image according to the first area and the second area.
It should be noted that, when the face region exists in the image, the boundary of the first region in the image is obtained by expanding or contracting the boundary of the face region in the image. That is, when the size of the face region in the image is smaller than the size of the display interface, the boundary of the first region in the image is obtained by extending the boundary of the face region in the image; when the size of the face area in the image is larger than that of the display interface, the boundary of the first area in the image is obtained by retracting the boundary of the face area in the image; when the size of the face area in the image is equal to the size of the display interface, the face area in the image can be directly used as the first area.
For example, as shown in fig. 3, assume that the size of the face region in the image is smaller than the size of the display interface, the center point of the face region is point M, and the display interface has width a and height b. Then, taking the center point of the face region (i.e., point M) as the starting point, the region boundary is extended by half the width of the display interface (i.e., a/2) on each side in the width direction and by half the height of the display interface (i.e., b/2) on each side in the height direction, and the region enclosed by the extended boundary is the first region. At this time, the first region has the same size as the display interface and the same center point as the face region in the image.
In addition, if the boundary of the first region obtained by extending the boundary of the face region in the image exceeds the boundary of the image, the boundary of the first region may be adjusted. For example, the border of the first region may be moved within the image in the width direction and/or the height direction until the border of the first region is just within the image.
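As a minimal sketch under the same assumptions (reusing the illustrative Rect type above), the first region may be built by extending the boundary from the center point of the face region and then moving it back inside the image when it exceeds the image boundary:

/* Build the first region: same size as the display interface (disp_w x disp_h),
 * same center point (cx, cy) as the face region. If the extended boundary
 * exceeds the boundary of the image (img_w x img_h), move the first region
 * back until it lies just within the image. */
Rect first_region(int cx, int cy, int disp_w, int disp_h, int img_w, int img_h)
{
    Rect r;
    r.w = disp_w;
    r.h = disp_h;
    r.x = cx - disp_w / 2;    /* extend a/2 to each side in the width direction */
    r.y = cy - disp_h / 2;    /* extend b/2 up and down in the height direction */
    if (r.x + r.w > img_w) r.x = img_w - r.w;    /* right boundary exceeded */
    if (r.y + r.h > img_h) r.y = img_h - r.h;    /* bottom boundary exceeded */
    if (r.x < 0) r.x = 0;                        /* left boundary exceeded */
    if (r.y < 0) r.y = 0;                        /* top boundary exceeded */
    return r;
}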
It should be noted that the image here is the image currently captured by the camera, and the image of the target area in the previous frame, acquired before the current image, is the image currently displayed on the display interface. That is, the image of the second area is the image that was being displayed on the display interface at the moment the current image was captured.
In addition, in the embodiment of the present application, the area of the current image to be displayed on the display interface may be determined by combining the first area and the second area, that is, by combining the area where the face is located in the current image with the area, in the previous frame, of the image being displayed on the display interface. In this way, smooth display of the image is facilitated.
Wherein, according to the first region and the second region, the operation of determining the target region in the image may be: and if the distance between the position of the first area and the position of the second area is smaller than or equal to the preset distance, taking the first area as a target area in the image. If the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as the target area in the image.
It should be noted that the preset distance may be set according to actual use requirements and may be set to a relatively small value.
In addition, since the size of the first area and the size of the second area are the same, the position of the first area and the position of the second area can both be indicated by corresponding position points. That is, the distance between the position of the first region and the position of the second region may be measured by the corresponding position point in the first region and the second region, which may be a corner point or a center point.
For example, the distance between the upper left corner point of the first region and the upper left corner point of the second region may be taken as the distance between the position of the first region and the position of the second region; or, the distance between the lower left corner point of the first region and the lower left corner point of the second region may be taken as the distance between the position of the first region and the position of the second region; or, the distance between the upper right corner point of the first region and the upper right corner point of the second region may be taken as the distance between the position of the first region and the position of the second region; or, the distance between the lower right corner point of the first area and the lower right corner point of the second area may be taken as the distance between the position of the first area and the position of the second area; alternatively, the distance between the center point of the first region and the center point of the second region may be taken as the distance between the position of the first region and the position of the second region.
If the distance between the position of the first region and the position of the second region is less than or equal to the preset distance, the position of the first region is relatively close to the position of the second region. In this case, switching from displaying the image of the second region to displaying the image of the first region is relatively smooth, so the first region can be directly used as the target region in the image, and the region where the face is located in the current image can subsequently be displayed directly.
If the distance between the position of the first region and the position of the second region is greater than the preset distance, the position of the first region is relatively far from the position of the second region. In this case, the display picture might jump abruptly if the display switched directly from the image of the second region to the image of the first region. Therefore, instead of taking the first region as the target region in the image, the position of the target region is obtained by moving the position of the second region toward the position of the first region by the preset distance. That is to say, the position of the image being displayed on the display interface approaches the position of the face in the current image by a fixed step (i.e., the preset distance), so that a smooth transition of the display picture can be realized.
The operation of moving the position of the second region toward the position of the first region by the preset distance to obtain the target position can be realized by adjusting the position of the second region.
It is assumed that the position of the first region and the position of the second region are both indicated by the upper left corner point. Let the coordinates of the upper left corner point of the first region be (X1, Y1), the coordinates of the upper left corner point of the second region be (X2, Y2), and the preset distance be v, where the difference between X1 and X2 is greater than v and the difference between Y1 and Y2 is also greater than v. In this case, the target position includes a size, which is the size of the first region, and the coordinates of an upper left corner point, which are obtained by adjusting the coordinates (X2, Y2) of the upper left corner point of the second region.
If X1 - X2 > v, the abscissa of the upper left corner point included in the target position is X2 + v; if X2 - X1 > v, it is X2 - v. Similarly, if Y1 - Y2 > v, the ordinate of the upper left corner point included in the target position is Y2 + v; if Y2 - Y1 > v, it is Y2 - v. The code for this process is implemented as follows:
if (X2 > X1 + v)
{
    X2 -= v;    /* second region is more than v to the right of the first: step left by v */
}
else if (X2 < X1 - v)
{
    X2 += v;    /* second region is more than v to the left of the first: step right by v */
}
if (Y2 > Y1 + v)
{
    Y2 -= v;    /* more than v apart in the Y direction: step toward the first region by v */
}
else if (Y2 < Y1 - v)
{
    Y2 += v;    /* more than v apart in the other Y direction: step toward the first region by v */
}
For example, as shown in fig. 4, the coordinates of the upper left corner point of the first region are (X1, Y1), the coordinates of the upper left corner point of the second region are (X2, Y2), and the preset distance is v, with X1 - X2 > v and Y1 - Y2 > v. The second region may therefore be moved to the right by v and down by v, and the position of the moved second region is the position of the target area.
A second possible way: if the face area does not exist in the image, taking an area, with the same size as the display interface and the same center point as the center point of the image, in the image as a first area; taking a target area in the last frame of image acquired before the image is acquired as a second area; and determining a target area in the image according to the first area and the second area.
It should be noted that, when the face region does not exist in the image, the first region in the image is a central region of the image, a central point of the central region of the image is a central point of the image, and a size of the central region of the image is a size of the display interface.
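For illustration, with the same assumed Rect type as the earlier sketches, the central region used as the first region in this case may be computed as follows:

/* Central region of the image: same size as the display interface, same
 * center point as the image (illustrative sketch, assuming the display
 * interface is no larger than the image). */
Rect center_first_region(int img_w, int img_h, int disp_w, int disp_h)
{
    Rect r;
    r.w = disp_w;
    r.h = disp_h;
    r.x = (img_w - disp_w) / 2;
    r.y = (img_h - disp_h) / 2;
    return r;
}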
In addition, the image here is the image currently captured by the camera, and the image of the target area in the previous frame, acquired before the current image, is the image currently displayed on the display interface. That is, the image of the second area is the image that was being displayed on the display interface at the moment the current image was captured.
Furthermore, in the embodiment of the present application, the area of the current image to be displayed on the display interface may be determined by combining the first area and the second area, that is, by combining the central area of the current image with the area, in the previous frame, of the image being displayed on the display interface. In this way, smooth display of the image is facilitated.
Wherein, according to the first region and the second region, the operation of determining the target region in the image may be: and if the distance between the position of the first area and the position of the second area is smaller than or equal to the preset distance, taking the first area as a target area in the image. If the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as the target area in the image.
It should be noted that the preset distance may be set according to actual use requirements and may be set to a relatively small value.
In addition, since the size of the first area and the size of the second area are the same, the position of the first area and the position of the second area can both be indicated by corresponding position points. That is, the distance between the position of the first region and the position of the second region may be measured by the corresponding position point in the first region and the second region, which may be a corner point or a center point.
If the distance between the position of the first region and the position of the second region is less than or equal to the preset distance, the position of the first region is relatively close to the position of the second region. In this case, switching from displaying the image of the second region to displaying the image of the first region is relatively smooth, so the first region can be directly used as the target region in the image, and the central region of the current image can subsequently be displayed directly.
If the distance between the position of the first region and the position of the second region is greater than the preset distance, the position of the first region is relatively far from the position of the second region. In this case, the display picture might jump abruptly if the display switched directly from the image of the second region to the image of the first region. Therefore, instead of taking the first region as the target region in the image, the position of the target region is obtained by moving the position of the second region toward the position of the first region by the preset distance. That is, the position of the image being displayed on the display interface approaches the center position of the current image by a fixed step (i.e., the preset distance), so that a smooth transition of the display picture can be realized.
Step 202: and cutting out a target area in the image to obtain a target image.
The image within the boundary of the target area in the image is the target image. Since the size of the target area is the same as the size of the display interface, the resolution of the target image is the same as the resolution of the display interface.
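The cutting operation itself is a plain crop. Below is a minimal sketch reusing the illustrative Rect type, assuming a single-channel image stored row by row; the pixel format is an assumption for illustration, as a real camera frame would typically have several channels.

#include <string.h>

/* Copy the pixels inside the boundary of the target area from the source
 * image (img_w pixels per row) into the target image buffer dst, which
 * holds target.w x target.h pixels. */
void crop_target_area(const unsigned char *src, int img_w,
                      Rect target, unsigned char *dst)
{
    for (int row = 0; row < target.h; row++) {
        /* copy one row of the target area into the target image */
        memcpy(dst + row * target.w,
               src + (target.y + row) * img_w + target.x,
               (size_t)target.w);
    }
}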
Step 203: and displaying the target image in the display interface.
Because the resolution of the target image is the same as that of the display interface, the target image can be displayed clearly and completely in the display interface, so the preview picture is not distorted and the preview effect is good.
In the embodiment of the application, after the image is acquired, the face region in the image is determined. And then, determining a target area in the image according to the face area in the image, and cutting out the target area in the image to obtain a target image. And finally, displaying the target image in the display interface. Therefore, as long as the user is in the image acquisition range, the target image displayed in the display interface can contain the face of the user as much as possible. Compared with the scheme that the user can only realize the face display by standing at a fixed position in the related art, the embodiment of the application enlarges the position range capable of realizing the face display, improves the flexibility of the face display and can greatly improve the user experience.
In addition, since the camera is usually fixed, the background and the position range of the images acquired by the camera are also fixed. In the embodiment of the present application, the face region is detected first, and the new preview region then gradually approaches the detected face region, so that the face is contained in the previewed target image as far as possible. This produces a preview effect similar to face tracking and can noticeably improve the visual effect and the user experience.
For ease of understanding, the image display method provided in the embodiments of the present application is illustrated below with reference to fig. 5. Referring to fig. 5, during image acquisition, each time a frame captured by the camera is acquired, the image display may be implemented as follows.
Step 501: and detecting whether a human face exists in the image. If the face exists, continuing to execute the following steps 502-504; if no human face exists, the following steps 505 to 508 are continuously executed.
Step 502: a face region in the image is determined.
Step 503: and taking the area with the same size as the display interface and the same center point as the center point of the face area in the image as a first area.
Step 504: and if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain the position of the target area in the image.
Wherein the second region is a target region in a previous frame of image acquired by the camera before the image is acquired.
Step 505: judging whether the target area in the previous frame of image is the central area thereof; if yes, go on to step 506; if not, go on to step 507-step 508.
The central point of the central area is the same as the central point of the image, and the size of the central area is the same as that of the display interface.
Step 506: the central region of the image is taken as the target region in the image.
Step 507: the central region of the image is taken as the first region.
Step 508: and if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain the position of the target area in the image.
After the target area in the image is obtained through the above step 504, step 506 or step 508, the following steps 509 to 510 may be continuously performed to realize image display.
Step 509: and cutting out a target area in the image to obtain a target image.
Step 510: and displaying the target image in the display interface.
Fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present application. Referring to fig. 6, the apparatus includes:
an image acquisition module 601, configured to acquire an image;
a first determining module 602, configured to determine a face region in the image;
a second determining module 603, configured to determine a target area in the image according to the face area in the image, where the size of the target area is the same as the size of the display interface;
a cutting module 604, configured to cut out a target area in the image to obtain a target image;
a display module 605, configured to display the target image in the display interface.
Optionally, the first determining module 602 is configured to:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, the second determining module 603 is configured to:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the second determining module 603 is configured to:
if the distance between the position of the first area and the position of the second area is smaller than or equal to the preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as the target area in the image.
Optionally, the apparatus further comprises:
a third determining module, configured to, if a face region does not exist in the image, take a region in the image, where the size of the region is the same as that of the display interface, and a center point of the region is the same as that of the image, as the first region;
a fourth determining module, configured to use a target region in a previous frame of image acquired before the image is acquired as a second region;
and the fifth determining module is used for determining the target area in the image according to the first area and the second area.
In the embodiment of the application, an image is acquired, and a face region in the image is determined. And then, determining a target area in the image according to the face area in the image, and cutting out the target area in the image to obtain a target image. And finally, displaying the target image in the display interface. Therefore, as long as the user is in the image acquisition range, the target image displayed in the display interface can contain the face of the user as much as possible. Compared with the scheme that the user can only realize the face display by standing at a fixed position in the related art, the embodiment of the application enlarges the position range capable of realizing the face display, improves the flexibility of the face display and can greatly improve the user experience.
Fig. 7 is a schematic structural diagram of an image display device according to an embodiment of the present application. Referring to fig. 7, the apparatus includes:
a processor 701 configured to determine a face region in the acquired image; determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of the display interface; cutting out a target area in the image to obtain a target image;
a display 702 for displaying the target image in the display interface.
Optionally, the processor 701 is configured to:
determining one or more detection frames in the image, wherein the detection frames contain human faces;
and taking the area indicated by the detection frame with the largest size in one or more detection frames in the image as the face area in the image.
Optionally, the processor 701 is configured to:
taking a region with the same size as the display interface and the same center point as the center point of the face region in the image as a first region;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the processor 701 is configured to:
if the distance between the position of the first area and the position of the second area is smaller than or equal to the preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area to the direction of the position of the first area by the preset distance to obtain a target position; and taking the area with the same position as the target position in the image as the target area in the image.
Optionally, the processor 701 is configured to:
if the face area does not exist in the image, taking an area, with the same size as the display interface and the same center point as the center point of the image, in the image as a first area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
Optionally, the apparatus further comprises:
and the image acquisition module is used for acquiring images.
In the embodiment of the application, a human face area is determined in an acquired image. And then, determining a target area in the image according to the face area in the image, and cutting out the target area in the image to obtain a target image. And finally, displaying the target image in the display interface. Therefore, as long as the user is in the image acquisition range, the target image displayed in the display interface can contain the face of the user as much as possible. Compared with the scheme that the user can only realize the face display by standing at a fixed position in the related art, the embodiment of the application enlarges the position range capable of realizing the face display, improves the flexibility of the face display and can greatly improve the user experience.
It should be noted that: in the image display device provided in the above embodiment, when displaying an image, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the image display device and the image display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 8 is a schematic structural diagram of a terminal 800 according to an embodiment of the present application. The terminal 800 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit) which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to perform operations performed in an image display method provided by method embodiments herein.
In some embodiments, the terminal 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited in this application.
The Radio Frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, disposed on a front panel of the terminal 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 805 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 801 for processing or to the radio frequency circuit 804 for voice communication. For stereo sound collection or noise reduction, there may be multiple microphones disposed at different portions of the terminal 800. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to determine the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 809 is used to supply power to the various components in the terminal 800. The power supply 809 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also support fast charging technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 801 may control the touch display screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used to collect motion data for games or for the user.
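As a rough illustration of this landscape/portrait decision, consider the sketch below; it is not from the patent, and the axis convention (x along the short edge, y along the long edge) and the simple comparison rule are assumptions made for the example.

```python
# Hypothetical sketch: picking landscape vs. portrait from the gravity
# components reported by an acceleration sensor. The axis convention and
# the comparison rule are illustrative assumptions.

def choose_orientation(ax: float, ay: float) -> str:
    """Return the UI orientation implied by gravity components in m/s^2."""
    # Gravity acting mostly along the long edge means the device is upright.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(choose_orientation(0.4, 9.7))   # -> portrait
print(choose_orientation(9.6, 0.5))   # -> landscape
```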
The gyro sensor 812 may detect the body orientation and rotation angle of the terminal 800, and may cooperate with the acceleration sensor 811 to capture the user's 3D actions on the terminal 800. Based on the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed on the side frame of the terminal 800 and/or underneath the touch display screen 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, it can detect the user's grip signal on the terminal 800, and the processor 801 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed underneath the touch display screen 805, the processor 801 controls an operable control on the UI according to the user's pressure operation on the touch display screen 805. The operable control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect the user's fingerprint, and the processor 801 identifies the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user according to the collected fingerprint. Upon identifying the user as a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800. When a physical button or a vendor logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the vendor logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display screen 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 805 is decreased. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
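One simple way to realize such brightness control is a clamped linear mapping from illuminance to backlight level, as in the sketch below; the lux ceiling and the mapping itself are assumptions for illustration, since the text above only states that brightness follows ambient intensity.

```python
# Hypothetical sketch of ambient-light-driven brightness control. The lux
# ceiling and the linear mapping are illustrative assumptions.

def display_brightness(ambient_lux: float,
                       min_level: float = 0.1,
                       max_level: float = 1.0,
                       max_lux: float = 1000.0) -> float:
    """Map ambient illuminance to a normalized backlight level."""
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)  # clamp to [0, 1]
    return min_level + ratio * (max_level - min_level)

print(display_brightness(50.0))    # dim room   -> low brightness
print(display_brightness(2000.0))  # bright sun -> full brightness
```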
The proximity sensor 816, also known as a distance sensor, is typically disposed on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the screen-on state to the screen-off state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the touch display screen 805 to switch from the screen-off state to the screen-on state.
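In practice such a switch is usually guarded with a small hysteresis band so the screen does not flicker near the threshold. The sketch below illustrates the idea; the two thresholds are assumptions, as the text above does not specify values.

```python
# Hypothetical sketch of the proximity-driven screen switch, with a
# hysteresis band (assumed thresholds) to avoid flickering.

def next_screen_state(distance_cm: float, currently_on: bool,
                      off_below_cm: float = 3.0,
                      on_above_cm: float = 5.0) -> bool:
    """Return whether the screen should be on for the given distance."""
    if currently_on and distance_cm < off_below_cm:
        return False   # user is close to the front panel: screen off
    if not currently_on and distance_cm > on_above_cm:
        return True    # user has moved away: screen back on
    return currently_on  # inside the hysteresis band: keep current state
```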
Those skilled in the art will appreciate that the structure shown in fig. 8 does not constitute a limitation on the terminal 800; the terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Fig. 9 is a schematic structural diagram of a server 900 according to an embodiment of the present application. The server 900 may be a server in a background server cluster. Specifically:
the server 900 includes a CPU (Central Processing Unit) 901, a system memory 904 including a RAM (Random Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system memory 904 and the central processing unit 901. The server 900 also includes a basic I/O (Input/Output) system 906, which facilitates the transfer of information between components within the computer, and a mass storage device 907 for storing an operating system 913, application programs 914, and other program modules 915.
The basic input/output system 906 includes a display 908 for displaying information and an input device 909, such as a mouse or keyboard, through which a user inputs information. The display 908 and the input device 909 are both connected to the central processing unit 901 through an input/output controller 910 connected to the system bus 905. The basic input/output system 906 may also include the input/output controller 910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 910 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 907 is connected to the central processing unit 901 through a mass storage controller (not shown) connected to the system bus 905. The mass storage device 907 and its associated computer-readable media provide non-volatile storage for the server 900. That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state memory technology, CD-ROM, DVD (Digital Versatile Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 904 and the mass storage device 907 may be collectively referred to as memory.
According to various embodiments of the present application, the server 900 may also operate by being connected, through a network such as the Internet, to a remote computer on that network. That is, the server 900 may be connected to the network 912 through the network interface unit 911 connected to the system bus 905, or the network interface unit 911 may be used to connect to another type of network or remote computer system (not shown).
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU. The one or more programs contain instructions for performing the operations performed in the image display methods provided by the method embodiments of the present application.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which when executed by a processor implements the steps of the image display method provided in the embodiment of fig. 2 described above. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to in the embodiments of the present application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
In some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the image display method provided in the embodiment of fig. 2 described above.
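To make the overall flow concrete, the following is a minimal Python sketch of the image display method described above and claimed below. It is not the patent's implementation: the OpenCV Haar-cascade face detector, the display size, the pixel value chosen for the "preset distance", and the per-axis clamp used to move the previous target area toward the new one are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of the image display flow:
# detect faces, take the largest detection frame as the face area,
# center a display-sized target area on it, limit how far that area may
# move from the previous frame's target area, then crop and display.
import cv2

DISPLAY_W, DISPLAY_H = 640, 480   # display interface size (assumed)
MAX_SHIFT = 40                    # "preset distance" in pixels (assumed)

# Stand-in face detector; the patent does not prescribe a detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def clamp_region(cx, cy, img_w, img_h):
    """Center a display-sized region on (cx, cy), kept inside the image.
    Assumes the captured frame is at least as large as the display."""
    x = min(max(cx - DISPLAY_W // 2, 0), img_w - DISPLAY_W)
    y = min(max(cy - DISPLAY_H // 2, 0), img_h - DISPLAY_H)
    return x, y

def target_region(frame, prev_region):
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        # Largest detection frame -> face area (claim 2).
        fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        first = clamp_region(fx + fw // 2, fy + fh // 2, w, h)
    else:
        # No face -> first area centered on the image center (claim 5).
        first = clamp_region(w // 2, h // 2, w, h)
    if prev_region is None:
        return first
    # Move at most MAX_SHIFT from the previous target area toward the new
    # one (claims 3 and 4); a per-axis clamp stands in for the claimed
    # move "by the preset distance" for simplicity.
    dx = first[0] - prev_region[0]
    dy = first[1] - prev_region[1]
    if max(abs(dx), abs(dy)) <= MAX_SHIFT:
        return first
    return (prev_region[0] + max(-MAX_SHIFT, min(MAX_SHIFT, dx)),
            prev_region[1] + max(-MAX_SHIFT, min(MAX_SHIFT, dy)))

def show_frame(frame, prev_region):
    x, y = target_region(frame, prev_region)
    target_image = frame[y:y + DISPLAY_H, x:x + DISPLAY_W]  # cut out target area
    cv2.imshow("display interface", target_image)           # display target image
    cv2.waitKey(1)
    return (x, y)
```

A capture loop would call show_frame on each acquired frame and feed the returned coordinates back in as prev_region, so the displayed area tracks the face without jumping more than the preset distance per frame.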
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. An image display method, characterized in that the method comprises:
acquiring an image and determining a face area in the image;
determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of a display interface;
cutting out the target area in the image to obtain a target image;
and displaying the target image in the display interface.
2. The method of claim 1, wherein the determining the face area in the image comprises:
determining one or more detection frames in the image, wherein each detection frame contains a human face;
and taking the area indicated by the detection frame with the largest size among the one or more detection frames in the image as the face area in the image.
3. The method of claim 1 or 2, wherein the determining the target area in the image according to the face area in the image comprises:
taking, as a first area, an area in the image that has the same size as the display interface and whose center point coincides with the center point of the face area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
4. The method of claim 3, wherein the determining the target area in the image according to the first area and the second area comprises:
if the distance between the position of the first area and the position of the second area is smaller than or equal to a preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area toward the position of the first area by the preset distance to obtain a target position; and taking the area in the image located at the target position as the target area in the image.
5. The method of claim 1, wherein the method further comprises:
if the face area does not exist in the image, taking, as a first area, an area in the image that has the same size as the display interface and whose center point coincides with the center point of the image;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
6. An image display apparatus, characterized in that the apparatus comprises:
a processor for determining a face area in the acquired image; determining a target area in the image according to the face area in the image, wherein the size of the target area is the same as that of a display interface; and cutting out the target area in the image to obtain a target image;
and the display is used for displaying the target image in the display interface.
7. The apparatus of claim 6, wherein the processor is to:
determining one or more detection frames in the image, wherein each detection frame contains a human face;
and taking the area indicated by the detection frame with the largest size among the one or more detection frames in the image as the face area in the image.
8. The apparatus of claim 6 or 7, wherein the processor is to:
taking, as a first area, an area in the image that has the same size as the display interface and whose center point coincides with the center point of the face area;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
9. The apparatus of claim 8, wherein the processor is to:
if the distance between the position of the first area and the position of the second area is smaller than or equal to a preset distance, taking the first area as a target area in the image;
if the distance between the position of the first area and the position of the second area is greater than the preset distance, moving the position of the second area toward the position of the first area by the preset distance to obtain a target position; and taking the area in the image located at the target position as the target area in the image.
10. The apparatus of claim 6, wherein the processor is further configured to:
if the face area does not exist in the image, taking, as a first area, an area in the image that has the same size as the display interface and whose center point coincides with the center point of the image;
taking a target area in the last frame of image acquired before the image is acquired as a second area;
and determining a target area in the image according to the first area and the second area.
11. The apparatus of claim 6, wherein the apparatus further comprises:
and the image acquisition module is used for acquiring images.
12. A computer device comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the program stored in the memory to perform the steps of the method of any one of claims 1 to 5.
CN202010364395.9A 2020-04-30 2020-04-30 Image display method and device and computer equipment Pending CN113592874A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010364395.9A CN113592874A (en) 2020-04-30 2020-04-30 Image display method and device and computer equipment
PCT/CN2021/089984 WO2021218926A1 (en) 2020-04-30 2021-04-26 Image display method and apparatus, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010364395.9A CN113592874A (en) 2020-04-30 2020-04-30 Image display method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN113592874A (en) 2021-11-02

Family

ID=78237286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010364395.9A Pending CN113592874A (en) 2020-04-30 2020-04-30 Image display method and device and computer equipment

Country Status (2)

Country Link
CN (1) CN113592874A (en)
WO (1) WO2021218926A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125286A (en) * 2021-11-18 2022-03-01 维沃移动通信有限公司 Shooting method and device thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10032067B2 (en) * 2016-05-28 2018-07-24 Samsung Electronics Co., Ltd. System and method for a unified architecture multi-task deep learning machine for object recognition
CN111145093A (en) * 2019-12-20 2020-05-12 北京五八信息技术有限公司 Image display method, image display device, electronic device, and storage medium
CN111583273A (en) * 2020-04-29 2020-08-25 京东方科技集团股份有限公司 Readable storage medium, display device and image processing method thereof

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003178311A (en) * 2002-10-25 2003-06-27 Mitsubishi Electric Corp Real time facial expression tracking device
CN1908962A (en) * 2006-08-21 2007-02-07 北京中星微电子有限公司 People face track display method and system for real-time robust
CN101589419A (en) * 2006-12-18 2009-11-25 索尼株式会社 Image processing device, image processing method, and program
US20140104313A1 (en) * 2011-06-10 2014-04-17 Panasonic Corporation Object detection frame display device and object detection frame display method
CN103458219A (en) * 2013-09-02 2013-12-18 小米科技有限责任公司 Method, device and terminal device for adjusting face in video call
US20160358341A1 (en) * 2015-06-07 2016-12-08 Apple Inc. Fast Template-Based Tracking
US20170094184A1 (en) * 2015-09-28 2017-03-30 Qualcomm Incorporated Systems and methods for performing automatic zoom
CN105357436A (en) * 2015-11-03 2016-02-24 广东欧珀移动通信有限公司 Image cropping method and system for image shooting
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN109089157A (en) * 2018-06-15 2018-12-25 广州华多网络科技有限公司 Method of cutting out, display equipment and the device of video pictures
CN109034013A (en) * 2018-07-10 2018-12-18 腾讯科技(深圳)有限公司 A kind of facial image recognition method, device and storage medium
CN109308469A (en) * 2018-09-21 2019-02-05 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109936703A (en) * 2019-02-26 2019-06-25 成都第二记忆科技有限公司 The method and apparatus that the video of monocular camera shooting is reconstructed
CN110189378A (en) * 2019-05-23 2019-08-30 北京奇艺世纪科技有限公司 A kind of method for processing video frequency, device and electronic equipment
CN110611787A (en) * 2019-06-10 2019-12-24 青岛海信电器股份有限公司 Display and image processing method

Also Published As

Publication number Publication date
WO2021218926A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN108965922B (en) Video cover generation method and device and storage medium
CN111382624A (en) Action recognition method, device, equipment and readable storage medium
CN109862412B (en) Method and device for video co-shooting and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110225390B (en) Video preview method, device, terminal and computer readable storage medium
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN112667835A (en) Work processing method and device, electronic equipment and storage medium
CN110941375A (en) Method and device for locally amplifying image and storage medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN111753606A (en) Intelligent model upgrading method and device
CN111586279A (en) Method, device and equipment for determining shooting state and storage medium
CN113160031A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111158575B (en) Method, device and equipment for terminal to execute processing and storage medium
CN109032492B (en) Song cutting method and device
CN111370096A (en) Interactive interface display method, device, equipment and storage medium
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
WO2021218926A1 (en) Image display method and apparatus, and computer device
CN114594885A (en) Application icon management method, device and equipment and computer readable storage medium
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination