WO2021218926A1 - Image display method and apparatus, and computer device - Google Patents

Image display method and apparatus, and computer device

Info

Publication number
WO2021218926A1
WO2021218926A1 PCT/CN2021/089984 CN2021089984W
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
target
face
display
Prior art date
Application number
PCT/CN2021/089984
Other languages
English (en)
Chinese (zh)
Inventor
陈莹
何胜远
王梁
申川
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2021218926A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Definitions

  • the embodiments of the present application relate to the field of image processing technology, and in particular to an image display method, device, and computer equipment.
  • biometric access control systems based on artificial intelligence and deep learning have developed rapidly.
  • Face recognition access control is increasingly used in access control systems because of its speed, convenient passage, and good user experience.
  • However, a face recognition access control device is constrained by its physical design: the preview interface on which a user views his or her face typically shows a fixed scene, that is, the image at a fixed position within the capture range of the camera is displayed on the preview interface.
  • As a result, the user must stand at that fixed position to see his or her face on the preview interface, which confines the face display position to a small range and is inconvenient for the user.
  • the embodiments of the present application provide an image display method, device, computer equipment, and storage medium, which can expand the range of positions at which a human face can be displayed.
  • In one aspect, an image display method is provided, including: acquiring an image and determining the face area in the image; determining a target area in the image according to the face area in the image, the size of the target area being the same as the size of the display interface; cropping out the target area in the image to obtain a target image; and displaying the target image on the display interface.
  • In a possible implementation, the determining the face area in the image includes: determining one or more detection frames in the image, and taking the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, the determining the target area in the image according to the face area in the image includes: taking the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; taking the target area in the last frame of image acquired before the image as a second area; and determining the target area in the image according to the first area and the second area.
  • In a possible implementation, the determining the target area in the image according to the first area and the second area includes: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, taking the first area as the target area in the image; if the distance is greater than the preset distance, moving the position of the second area toward the position of the first area by the preset distance to obtain a target position, and taking the area whose position is the target position as the target area in the image.
  • In a possible implementation, the method further includes: if there is no face area in the image, taking the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the image as the first area; taking the target area in the last frame of image acquired before the image as the second area; and determining the target area in the image according to the first area and the second area.
  • an image display device includes:
  • the processor is configured to: determine the face area in the acquired image; determine the target area in the image according to the face area in the image, the size of the target area being the same as the size of the display interface; and crop out the target area in the image to obtain a target image;
  • the display is used to display the target image on the display interface.
  • In a possible implementation, the processor is used to: determine one or more detection frames in the image, and take the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, the processor is used to: take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; take the target area in the last frame of image acquired before the image as a second area; and determine the target area in the image according to the first area and the second area.
  • In a possible implementation, the processor is used to: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, take the first area as the target area in the image; if the distance is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain a target position, and take the area whose position is the target position as the target area in the image.
  • In a possible implementation, the processor is further configured to: if there is no face area in the image, take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the image as the first area; take the target area in the last frame of image acquired before the image as the second area; and determine the target area in the image according to the first area and the second area.
  • the device further includes:
  • the image acquisition module is used to acquire images.
  • an image display device includes:
  • the image acquisition module is used to acquire an image;
  • the first determining module is used to determine the face area in the image
  • the second determining module is configured to determine a target area in the image according to the face area in the image, and the size of the target area is the same as the size of the display interface;
  • the cropping module is used to crop out the target area in the image to obtain the target image
  • the display module is used to display the target image on the display interface.
  • In a possible implementation, the first determining module is used to: determine one or more detection frames in the image, and take the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, the second determining module is used to: take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; take the target area in the last frame of image acquired before the image as a second area; and determine the target area in the image according to the first area and the second area.
  • In a possible implementation, the second determining module is further used to: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, take the first area as the target area in the image; if the distance is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain a target position, and take the area whose position is the target position as the target area in the image.
  • the device further includes:
  • the third determining module is configured to, if there is no face area in the image, use an area in the image with the same size as the size of the display interface and the same center point as the center point of the image as the first area;
  • the fourth determining module is configured to use the target area in the last frame of image acquired before the image is acquired as the second area;
  • the fifth determining module is configured to determine a target area in the image according to the first area and the second area.
  • In one aspect, a computer device is provided, including a processor and a memory, where the memory is used to store a computer program and the processor is used to execute the program stored in the memory to implement the steps of the above image display method.
  • In one aspect, a computer-readable storage medium is provided, in which a computer program is stored; when the computer program is executed by a processor, the steps of the above image display method are implemented.
  • In one aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the steps of the above image display method.
  • In this way, the embodiments of the present application expand the range of positions at which a face can be displayed, improve the flexibility of face display, and can greatly improve the user experience.
  • FIG. 1 is a flowchart of an image display method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of another image display method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a face area and a first area provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of a first area, a second area, and a target area provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of another image display method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an image display device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another image display device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • Fig. 1 is a flowchart of an image display method provided by an embodiment of the present application. Referring to Figure 1, the method includes the following steps:
  • Step 101 Obtain an image, and determine the face area in the image.
  • Step 102 Determine a target area in the image according to the face area in the image, and the size of the target area is the same as the size of the display interface.
  • Step 103 Cut out the target area in the image to obtain the target image.
  • Step 104 Display the target image on the display interface.
  • In the embodiments of the present application, an image is acquired and the face area in the image is determined.
  • Then, the target area in the image is determined according to the face area in the image, and the target area is cropped out to obtain the target image.
  • Finally, the target image is displayed on the display interface. In this way, as long as the user is within the image acquisition range, the target image displayed on the display interface will contain the user's face as far as possible.
  • The embodiments of the present application thus expand the range of positions at which a face can be displayed, improve the flexibility of face display, and can greatly improve the user experience.
  • In a possible implementation, determining the face area in the image includes: determining one or more detection frames in the image, and taking the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, determining the target area in the image according to the face area includes: taking the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; taking the target area in the last frame of image acquired before the image as a second area; and determining the target area in the image according to the first area and the second area.
  • In a possible implementation, determining the target area in the image according to the first area and the second area includes: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, taking the first area as the target area in the image; if the distance is greater than the preset distance, moving the position of the second area toward the position of the first area by the preset distance to obtain a target position, and taking the area whose position is the target position as the target area in the image.
  • In a possible implementation, the method further includes: if there is no face area in the image, taking the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the image as the first area; taking the target area in the last frame of image acquired before the image as the second area; and determining the target area in the image according to the first area and the second area.
  • Fig. 2 is a flowchart of an image display method provided by an embodiment of the present application. Referring to Figure 2, the method includes the following steps.
  • Step 201 Obtain an image, and determine a target area in the image.
  • In the embodiments of the present application, during image acquisition, one frame of image can be obtained for processing each time the camera captures a frame.
  • the camera can collect images in real time.
  • the camera may be a camera used in an access control device.
  • the image collected by the camera can be used for face recognition.
  • the access control device can open the door when face recognition succeeds, and keep the door closed when face recognition fails.
  • the image collected by the camera can be previewed by the user, that is, can be used for display on the display screen of the access control device, and can be specifically displayed according to the image display method provided in the embodiment of the present application.
  • the image display method provided in the embodiments of the present application can be applied not only to access control devices, but also to other devices with image display requirements, such as payment devices, which are not limited in the embodiments of the present application.
  • the size of the target area is the same as the size of the display interface. In other words, the width and height of the target area are consistent with the screen display resolution. In this way, the image of the target area can be directly displayed on the display interface.
  • the target area in the image can be determined, so that the subsequent image preview in the display interface can be realized accordingly. That is, the embodiment of the present application can realize real-time preview of the image collected by the camera during the image collection process.
  • When determining the target area in the image, if there is a face area in the image, the target area can be determined in the first possible way below; if there is no face area in the image, the target area can be determined in the second possible way below.
  • The first possible way: determine the face area in the image, and determine the target area in the image according to the face area in the image.
  • the face area in the image is an area where a face exists in the image.
  • In a possible implementation, one or more detection frames in the image can be determined, and the area indicated by the largest of the one or more detection frames is taken as the face area in the image.
  • the detection frame contains a human face, and the detection frame is used to indicate an area where the human face exists.
  • the area can usually be a rectangular area, so the size of the detection frame can usually include the width and height of the detection frame.
  • Generally, the face contained in the largest detection frame is most likely to be the face of the person currently in front of the camera, so the area indicated by the largest of the one or more detection frames is taken as the face area in the image.
  • Alternatively, the area indicated by the widest or tallest detection frame among the one or more detection frames can be used as the face area in the image, as in the selection sketch below.
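  • As a minimal Python sketch of this selection step (assuming each detection frame is an (x, y, w, h) tuple in pixel coordinates; the function name is illustrative):

        def largest_face_area(detection_frames):
            # Return the detection frame with the largest area, or None if the
            # image contains no face. Using key=lambda box: box[2] (or box[3])
            # instead would give the widest (or tallest) frame variant noted above.
            if not detection_frames:
                return None
            return max(detection_frames, key=lambda box: box[2] * box[3])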
  • the image can be input to a face detection model, and the face detection model outputs the position of each detection frame in the image.
  • the position of a detection frame may include the size of the detection frame and the coordinates of a position point (such as the upper-left corner point, the lower-left corner point, the upper-right corner point, the lower-right corner point, or the center point).
  • one or more detection frames in the image can also be determined in other ways, which is not limited in the embodiment of the present application.
  • the face detection model may be a pre-trained model that can determine the detection frame where the face appears in the image.
  • the face detection model may be CNN (Convolutional Neural Network, convolutional neural network).
  • If the coordinates of the position point included in the position of the detection frame are not the coordinates of the center point, then after the area indicated by the largest detection frame is taken as the face area in the image, the coordinates of the center point of the face area can be determined from the size of the face area and the coordinates of that position point.
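  • The embodiments do not mandate a particular detector. As one concrete stand-in for the face detection model described above, OpenCV's bundled Haar cascade returns exactly such (x, y, w, h) detection frames, and the center point follows from the corner coordinates and the size; this is a hedged sketch in Python, not the claimed implementation:

        import cv2

        # Stand-in detector: the description assumes a pre-trained model such
        # as a CNN; a Haar cascade is used here only to keep the sketch runnable.
        _detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_faces(image_bgr):
            # Return (x, y, w, h) detection frames found in a BGR image.
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            return list(_detector.detectMultiScale(gray, scaleFactor=1.1,
                                                   minNeighbors=5))

        def center_of(box):
            # Center point of a detection frame from its top-left corner and size.
            x, y, w, h = box
            return (x + w / 2.0, y + h / 2.0)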
  • In a possible implementation, the operation of determining the target area in the image may be: taking the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as the target area in the image.
  • In another possible implementation, the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area is taken as the first area; the target area in the last frame of image captured before this image is taken as the second area; and the target area in the image is determined according to the first area and the second area.
  • The boundary of the first area is obtained by expanding or shrinking the boundary of the face area within the image. That is, when the size of the face area is smaller than the size of the display interface, the boundary of the first area is obtained by expanding the boundary of the face area; when the size of the face area is larger than the size of the display interface, the boundary of the first area is obtained by shrinking the boundary of the face area.
  • In addition, when the size of the face area equals the size of the display interface, the face area in the image can be directly used as the first area.
  • For example, suppose the size of the face area is smaller than the size of the display interface, the center point of the face area is point M, and the display interface has width a and height b. Starting from point M, the area boundary is expanded to half the display width (a/2) on each side in the width direction and to half the display height (b/2) on each side in the height direction; the area enclosed by the expanded boundary is the first area. The first area then has the same size as the display interface and the same center point as the face area.
  • If the first area determined in this way extends beyond the image, boundary adjustment of the first area may be performed.
  • Specifically, the boundary of the first area may be moved into the image in the width direction and/or the height direction until the first area lies exactly within the image, as in the sketch below.
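  • A minimal Python sketch of the first-area computation together with this boundary adjustment (assuming the display size does not exceed the image size; names are illustrative):

        def first_area(center, display_w, display_h, image_w, image_h):
            # Display-sized area centered on the face center, shifted back into
            # the image along either axis when it would cross the image border.
            cx, cy = center
            x = int(round(cx - display_w / 2.0))
            y = int(round(cy - display_h / 2.0))
            x = max(0, min(x, image_w - display_w))   # boundary adjustment
            y = max(0, min(y, image_h - display_h))
            return (x, y, display_w, display_h)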
  • the image is the image currently collected by the camera.
  • the image of the target area in the last frame of the image collected by the camera before the image is collected is the image currently displayed on the display interface. That is, the image of the second area is the image being displayed on the display interface when the camera collects the image.
  • In the embodiments of the present application, the first area and the second area may be combined. That is, the area where the face is located in the current image is considered together with the area, in the previous frame, occupied by the image being displayed on the display interface, in order to determine the area of the current image to be displayed on the display interface. This helps achieve a smooth display of the image.
  • In a possible implementation, the operation of determining the target area in the image may be: if the distance between the position of the first area and the position of the second area is less than or equal to the preset distance, the first area is taken as the target area in the image; if the distance is greater than the preset distance, the position of the second area is moved toward the position of the first area by the preset distance to obtain the target position, and the area whose position is the target position is taken as the target area in the image.
  • the preset distance can be set according to actual usage requirements, and the preset distance can be set to be smaller.
  • the position of the first area and the position of the second area can both be indicated by corresponding position points. That is, the distance between the position of the first area and the position of the second area can be measured by the corresponding position point in the first area and the second area, and the position point may be a corner point or a center point.
  • For example, the distance between the upper-left corner point of the first area and the upper-left corner point of the second area can be taken as the distance between the position of the first area and the position of the second area; alternatively, the distance between the lower-left corner points, the upper-right corner points, or the lower-right corner points of the two areas can be used; alternatively, the distance between the center point of the first area and the center point of the second area can be taken as the distance between the position of the first area and the position of the second area. A minimal sketch of this measurement follows.
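  • As a Python sketch of this measurement (Euclidean distance between corresponding points is one reasonable choice; the embodiments leave the exact metric open, and the worked example below compares the two axes independently):

        import math

        def area_distance(area_a, area_b, point="top_left"):
            # Distance between two (x, y, w, h) areas measured at matching
            # position points: a corner or the center, as described above.
            def locate(area):
                x, y, w, h = area
                if point == "center":
                    return (x + w / 2.0, y + h / 2.0)
                return (x, y)
            (ax, ay), (bx, by) = locate(area_a), locate(area_b)
            return math.hypot(ax - bx, ay - by)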
  • When the distance between the position of the first area and the position of the second area is less than or equal to the preset distance, the position of the first area is close to the position of the second area. In this case, switching the display from the image of the second area to the image of the first area is relatively smooth, so the first area can be directly used as the target area in the image, and the area where the face is located in the current image can then be displayed directly.
  • When the distance between the position of the first area and the position of the second area is greater than the preset distance, the position of the first area is far from the position of the second area.
  • In this case, switching the display from the image of the second area directly to the image of the first area is likely to produce visible jumps in the displayed picture.
  • Therefore, the first area is not used as the target area; instead, the position of the second area is moved toward the position of the first area by the preset distance to obtain the position of the target area in the image. That is, the position of the image being displayed on the display interface approaches the position of the face in the current image by a fixed step (the preset distance), so that a smooth transition of the displayed image can be achieved.
  • In a possible implementation, the operation of moving the position of the second area toward the position of the first area by the preset distance to obtain the target position can be realized by adjusting the position of the second area.
  • For example, suppose the position of the first area and the position of the second area are both indicated by their upper-left corner points, the coordinates of the upper-left corner of the first area are (X1, Y1), the coordinates of the upper-left corner of the second area are (X2, Y2), the preset distance is v, the difference between X1 and X2 is greater than v, and the difference between Y1 and Y2 is also greater than v.
  • The target position includes a size and upper-left corner coordinates. The size is the size of the first area, and the upper-left corner coordinates are obtained by adjusting the upper-left corner coordinates (X2, Y2) of the second area.
  • Specifically, if X1 - X2 > v, the abscissa of the upper-left corner included in the target position is X2 + v; if X2 - X1 > v, it is X2 - v. Likewise, if Y1 - Y2 > v, the ordinate of the upper-left corner included in the target position is Y2 + v; if Y2 - Y1 > v, it is Y2 - v.
  • For example, with upper-left corners (X1, Y1) and (X2, Y2) and preset distance v, where X1 - X2 > v and Y1 - Y2 > v, the second area can be moved right by v and down by v, and the position of the moved second area is the position of the target area, as in the sketch below.
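  • A per-axis Python sketch of this update (the behavior when an axis is already within v of its target is an assumption; the embodiments only define the cases where the difference exceeds v):

        def step_toward(second_tl, first_tl, v):
            # Move the second area's upper-left corner toward the first area's
            # by the preset distance v per axis: X2 + v when X1 - X2 > v,
            # X2 - v when X2 - X1 > v, and likewise for Y.
            (x2, y2), (x1, y1) = second_tl, first_tl

            def step(cur, target):
                if target - cur > v:
                    return cur + v
                if cur - target > v:
                    return cur - v
                return target  # within v on this axis (assumed behavior)

            return (step(x2, x1), step(y2, y1))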
  • The second possible way: if there is no face area in the image, the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the image is taken as the first area; the target area in the last frame of image collected before this image is taken as the second area; and the target area in the image is determined according to the first area and the second area.
  • In other words, in this case the first area in the image is the central area of the image: the center point of the central area is the center point of the image, and the size of the central area is the size of the display interface, as in the sketch below.
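  • A one-function Python sketch of this central first area (sizes are assumed to fit within the image):

        def central_area(image_w, image_h, display_w, display_h):
            # Display-sized area whose center point is the image center.
            return ((image_w - display_w) // 2, (image_h - display_h) // 2,
                    display_w, display_h)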
  • the image is the image currently collected by the camera.
  • the image of the target area in the last frame of the image collected by the camera before the image is collected is the image currently displayed on the display interface. That is, the image of the second area is the image being displayed on the display interface when the camera collects the image.
  • In the embodiments of the present application, the first area and the second area may be combined. That is, the central area of the current image is considered together with the area, in the previous frame, occupied by the image being displayed on the display interface, in order to determine the area of the current image to be displayed on the display interface. This helps achieve a smooth display of the image.
  • In a possible implementation, the operation of determining the target area in the image may be: if the distance between the position of the first area and the position of the second area is less than or equal to the preset distance, the first area is taken as the target area in the image; if the distance is greater than the preset distance, the position of the second area is moved toward the position of the first area by the preset distance to obtain the target position, and the area whose position is the target position is taken as the target area in the image.
  • the preset distance can be set according to actual usage requirements, and the preset distance can be set to be smaller.
  • the position of the first area and the position of the second area can both be indicated by corresponding position points. That is, the distance between the position of the first area and the position of the second area can be measured by the corresponding position point in the first area and the second area, and the position point may be a corner point or a center point.
  • When the distance between the position of the first area and the position of the second area is less than or equal to the preset distance, switching the display from the image of the second area to the image of the first area is relatively smooth, so the first area can be directly used as the target area in the image, and the central area of the current image can then be displayed directly.
  • When the distance between the position of the first area and the position of the second area is greater than the preset distance, the position of the first area is far from the position of the second area.
  • In this case, switching the display from the image of the second area directly to the image of the first area is likely to produce visible jumps in the displayed picture.
  • Therefore, the first area is not used as the target area; instead, the position of the second area is moved toward the position of the first area by the preset distance to obtain the position of the target area in the image. That is, the position of the image being displayed on the display interface approaches the center position of the current image by a fixed step (the preset distance), so that a smooth transition of the displayed image can be achieved.
  • Step 202 Cut out the target area in the image to obtain the target image.
  • the image located within the boundary of the target area in the image is the target image. Since the size of the target area is the same as the size of the display interface, the resolution of the target image is the same as the resolution of the display interface.
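  • With the image held as an H x W (x C) array, the cropping step is a plain slice; a sketch, with the target area given as (x, y, w, h) as in the earlier sketches:

        def crop_target(image, target_area):
            # Cut out the target area; its resolution equals the display
            # interface's, so the result can be shown without scaling.
            x, y, w, h = target_area
            return image[y:y + h, x:x + w]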
  • Step 203 Display the target image on the display interface.
  • Since the resolution of the target image is the same as the resolution of the display interface, the target image can be displayed clearly and completely on the display interface, so the preview is not distorted and the preview effect is good.
  • In the embodiments of the present application, an image is acquired and the face area in the image is determined.
  • Then, the target area in the image is determined according to the face area in the image, and the target area is cropped out to obtain the target image.
  • Finally, the target image is displayed on the display interface. In this way, as long as the user is within the image acquisition range, the target image displayed on the display interface will contain the user's face as far as possible.
  • The embodiments of the present application thus expand the range of positions at which a face can be displayed, improve the flexibility of face display, and can greatly improve the user experience.
  • In the related art, since the preview shows a fixed scene, the background and the position range of the displayed picture are also fixed.
  • In the embodiments of the present application, the face area can be detected first, and the new preview area then gradually approaches the detected face area. This ensures that the previewed target image contains the face as far as possible and also produces a preview effect similar to face tracking, which can significantly improve the visual effect and user experience.
  • the image display method provided by the embodiment of the present application will be described with reference to FIG. 5 as an example.
  • the image display can be implemented according to the following steps.
  • Step 501 Detect whether there is a human face in the image. If there is a human face, continue to perform the following steps 502-504; if there is no human face, continue to perform the following steps 505-508.
  • Step 502 Determine the face area in the image.
  • Step 503 A region in the image with the same size as the size of the display interface and the same center point as the center point of the face region in the image is taken as the first region.
  • Step 504 If the distance between the position of the first area and the position of the second area is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain the position of the target area in the image.
  • the second area is the target area in the last frame of image collected by the camera before the image is collected.
  • Step 505 Determine whether the target area in the previous frame of image is its central area; if it is, proceed to step 506; if not, proceed to step 507 to step 508.
  • the center point of the center area is the same as the image center point, and the size of the center area is the same as the size of the display interface.
  • Step 506 Use the central area of the image as the target area in the image.
  • Step 507 Use the central area of the image as the first area.
  • Step 508 If the distance between the position of the first area and the position of the second area is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain the position of the target area in the image.
  • After the target area in the image is obtained through step 504, step 506, or step 508, the following steps 509 and 510 can be performed to display the image.
  • Step 509 Cut out the target area in the image to obtain the target image.
  • Step 510 Display the target image on the display interface.
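  • Tying the walkthrough together, the following hedged Python sketch runs one iteration of the FIG. 5 flow using the helper functions sketched earlier (detect_faces, center_of, largest_face_area, first_area, central_area, area_distance, step_toward, crop_target); it illustrates steps 501 to 510 and is not the claimed implementation:

        def display_step(image, display_size, prev_target_area, preset_distance):
            # Return (target_image, target_area) for one captured frame.
            # prev_target_area is the second area, i.e. the target area of the
            # previous frame (None for the very first frame).
            img_h, img_w = image.shape[:2]
            disp_w, disp_h = display_size
            faces = detect_faces(image)                        # step 501
            if faces:                                          # steps 502-503
                face = largest_face_area(faces)
                fa = first_area(center_of(face), disp_w, disp_h, img_w, img_h)
            else:                                              # steps 505-507
                fa = central_area(img_w, img_h, disp_w, disp_h)
                if prev_target_area is None or tuple(prev_target_area) == fa:
                    return crop_target(image, fa), fa          # step 506
            if (prev_target_area is None
                    or area_distance(fa, prev_target_area) <= preset_distance):
                target = fa
            else:                                              # steps 504 / 508
                x, y = step_toward(tuple(prev_target_area)[:2], fa[:2],
                                   preset_distance)
                target = (x, y, disp_w, disp_h)
            return crop_target(image, target), target          # steps 509-510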
  • FIG. 6 is a schematic structural diagram of an image display device provided by an embodiment of the present application. Referring to Figure 6, the device includes:
  • the image acquisition module 601 is used to acquire an image
  • the first determining module 602 is configured to determine the face area in the image
  • the second determining module 603 is configured to determine a target area in the image according to the face area in the image, and the size of the target area is the same as the size of the display interface;
  • the cropping module 604 is used to crop out the target area in the image to obtain the target image
  • the display module 605 is used to display the target image on the display interface.
  • In a possible implementation, the first determining module 602 is used to: determine one or more detection frames in the image, and take the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, the second determining module 603 is used to: take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; take the target area in the last frame of image acquired before the image as a second area; and determine the target area in the image according to the first area and the second area.
  • In a possible implementation, the second determining module 603 is further used to: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, take the first area as the target area in the image; if the distance is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain a target position, and take the area whose position is the target position as the target area in the image.
  • the device further includes:
  • the third determining module is configured to, if there is no face area in the image, use an area in the image with the same size as the size of the display interface and the same center point as the center point of the image as the first area;
  • the fourth determining module is configured to use the target area in the last frame of image acquired before the image is acquired as the second area;
  • the fifth determining module is used to determine the target area in the image according to the first area and the second area.
  • an image is acquired, and the face area in the image is determined. Then, the target area in the image is determined according to the face area in the image, and then the target area in the image is cut out to obtain the target image. Finally, the target image is displayed on the display interface. In this way, as long as the user is within the image acquisition range, the target image displayed on the display interface will contain the user's face as much as possible.
  • The embodiments of the present application thus expand the range of positions at which a face can be displayed, improve the flexibility of face display, and can greatly improve the user experience.
  • FIG. 7 is a schematic structural diagram of an image display device provided by an embodiment of the present application. Referring to Figure 7, the device includes:
  • the processor 701 is configured to: determine the face area in the acquired image; determine the target area in the image according to the face area in the image, the size of the target area being the same as the size of the display interface; and crop out the target area in the image to obtain a target image;
  • the display 702 is used to display a target image on the display interface.
  • In a possible implementation, the processor 701 is configured to: determine one or more detection frames in the image, and take the area indicated by the largest of the one or more detection frames as the face area in the image.
  • In a possible implementation, the processor 701 is configured to: take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the face area as a first area; take the target area in the last frame of image acquired before the image as a second area; and determine the target area in the image according to the first area and the second area.
  • In a possible implementation, the processor 701 is configured to: if the distance between the position of the first area and the position of the second area is less than or equal to a preset distance, take the first area as the target area in the image; if the distance is greater than the preset distance, move the position of the second area toward the position of the first area by the preset distance to obtain a target position, and take the area whose position is the target position as the target area in the image.
  • In a possible implementation, the processor 701 is further configured to: if there is no face area in the image, take the area in the image whose size is the same as the size of the display interface and whose center point is the same as the center point of the image as the first area; take the target area in the last frame of image acquired before the image as the second area; and determine the target area in the image according to the first area and the second area.
  • the device further includes:
  • the image acquisition module is used to acquire images.
  • the human face area is determined in the acquired image. Then, the target area in the image is determined according to the face area in the image, and then the target area in the image is cut out to obtain the target image. Finally, the target image is displayed on the display interface. In this way, as long as the user is within the image acquisition range, the target image displayed on the display interface will contain the user's face as much as possible.
  • The embodiments of the present application thus expand the range of positions at which a face can be displayed, improve the flexibility of face display, and can greatly improve the user experience.
  • When the image display device provided in the above embodiments displays images, only the division of the above functional modules is used as an example for illustration.
  • In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the image display device provided in the foregoing embodiment and the image display method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, and will not be repeated here.
  • FIG. 8 is a schematic structural diagram of a terminal 800 provided by an embodiment of the present application.
  • the terminal 800 can be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 800 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
  • the terminal 800 includes a processor 801 and a memory 802.
  • the processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 801 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 801 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 802 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 802 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 802 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 801 to implement the operations performed in the image display method provided by the embodiments of the present application.
  • the terminal 800 may optionally further include: a peripheral device interface 803 and at least one peripheral device.
  • the processor 801, the memory 802, and the peripheral device interface 803 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 803 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 804, a display screen 805, a camera component 806, an audio circuit 807, a positioning component 808, and a power supply 809.
  • the peripheral device interface 803 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 801 and the memory 802.
  • In some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral device interface 803 may be implemented on a separate chip or circuit board, which is not limited in the embodiments of the present application.
  • the radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and the like.
  • the radio frequency circuit 804 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 804 may also include a circuit related to NFC (Near Field Communication), which is not limited in the embodiment of the present application.
  • the display screen 805 is used to display UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 805 also has the ability to collect touch signals on or above the surface of the display screen 805.
  • the touch signal can be input to the processor 801 as a control signal for processing.
  • the display screen 805 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 805, arranged on the front panel of the terminal 800; in other embodiments, there may be at least two display screens 805, respectively arranged on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display screen 805 may be a flexible display screen disposed on a curved or folding surface of the terminal 800. The display screen 805 can even be set in a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 805 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 806 is used to capture images or videos.
  • the camera assembly 806 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • the camera assembly 806 may also include a flash.
  • the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
  • the audio circuit 807 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals to be input to the processor 801 for processing, or input to the radio frequency circuit 804 to implement voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, which are respectively set in different parts of the terminal 800.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 801 or the radio frequency circuit 804 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • the audio circuit 807 may also include a headphone jack.
  • the positioning component 808 is used to locate the current geographic location of the terminal 800 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 809 is used to supply power to various components in the terminal 800.
  • the power source 809 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 800 further includes one or more sensors 810.
  • the one or more sensors 810 include, but are not limited to: an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
  • the acceleration sensor 811 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 800.
  • the acceleration sensor 811 can be used to detect the components of the gravitational acceleration on three coordinate axes.
  • the processor 801 may control the display screen 805 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 811.
  • the acceleration sensor 811 may also be used for the collection of game or user motion data.
  • the gyroscope sensor 812 can detect the body direction and rotation angle of the terminal 800, and the gyroscope sensor 812 can cooperate with the acceleration sensor 811 to collect the user's 3D actions on the terminal 800.
  • the processor 801 can implement the following functions according to the data collected by the gyroscope sensor 812: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 813 may be disposed on the side frame of the terminal 800 and/or the lower layer of the display screen 805.
  • the processor 801 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 813.
  • the processor 801 controls the operability controls on the UI interface according to the pressure operation of the user on the display screen 805.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 814 is used to collect the user's fingerprint.
  • the processor 801 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user's identity according to the collected fingerprint. When it is recognized that the user's identity is a trusted identity, the processor 801 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 814 may be provided on the front, back, or side of the terminal 800. When a physical button or a manufacturer logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer logo.
  • the optical sensor 815 is used to collect the ambient light intensity.
  • the processor 801 may control the display brightness of the display screen 805 according to the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the display screen 805 is increased; when the ambient light intensity is low, the display brightness of the display screen 805 is decreased.
  • the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 according to the ambient light intensity collected by the optical sensor 815.
  • the proximity sensor 816 is also called a distance sensor, and is usually arranged on the front panel of the terminal 800.
  • the proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800.
  • When the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the display screen 805 to switch from the bright-screen state to the off-screen state; when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually increases, the processor 801 controls the display screen 805 to switch from the off-screen state to the bright-screen state.
  • The structure shown in FIG. 8 does not constitute a limitation on the terminal 800; the terminal may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • FIG. 9 is a schematic structural diagram of a server 900 provided by an embodiment of the present application.
  • the server 900 may be a server in a background server cluster. Specifically:
  • the server 900 includes a CPU (Central Processing Unit) 901, a system memory 904 including a RAM (Random Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system memory 904 and the central processing unit 901.
  • the server 900 also includes a basic I/O (Input/Output) system 906 that helps transfer information between devices in the computer, and a mass storage device 907 for storing an operating system 913, application programs 914, and other program modules 915.
  • the basic input/output system 906 includes a display 908 for displaying information and an input device 909 such as a mouse and a keyboard for the user to input information.
  • the display 908 and the input device 909 are both connected to the central processing unit 901 through the input/output controller 910 connected to the system bus 905.
  • the basic input/output system 906 may also include an input/output controller 910 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus.
  • the input/output controller 910 also provides output to a display screen, a printer, or other types of output devices.
  • the mass storage device 907 is connected to the central processing unit 901 through a mass storage controller (not shown) connected to the system bus 905.
  • the mass storage device 907 and its associated computer-readable medium provide non-volatile storage for the server 900. That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • Computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented by any method or technology for storing information such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technology, as well as CD-ROM, DVD (Digital Versatile Disc) or other optical storage, tape cartridges, magnetic tape, magnetic disk storage, or other magnetic storage devices.
  • the system memory 904 and the mass storage device 907 may be collectively referred to as memory.
  • According to various embodiments of the present application, the server 900 may also operate through a remote computer connected via a network such as the Internet. That is, the server 900 can be connected to the network 912 through the network interface unit 911 connected to the system bus 905, or the network interface unit 911 can be used to connect to other types of networks or remote computer systems (not shown).
  • the foregoing memory also includes one or more programs, and one or more programs are stored in the memory and configured to be executed by the CPU.
  • the one or more programs include instructions for performing operations performed in the image display method provided by the method embodiment in the embodiment of the present application.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the image display method provided in the embodiment of FIG. 2 are implemented.
  • the computer-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • the computer-readable storage medium mentioned in the embodiment of the present application may be a non-volatile storage medium, in other words, it may be a non-transitory storage medium.
  • a computer program product containing instructions is also provided, which when running on a computer, causes the computer to execute the steps of the image display method provided in the embodiment of FIG. 2 above.

Abstract

Embodiments of the present application relate to the technical field of image processing. Disclosed are an image display method and apparatus, and a computer device. The method includes: acquiring an image and determining a face area in the image; determining a target area in the image according to the face area in the image, the size of the target area being the same as the size of a display interface; cropping the target area out of the image to obtain a target image; and displaying the target image on the display interface. In the embodiments of the present application, as long as a user is within the image acquisition range, the user's face will be included as far as possible in the target image displayed on the display interface, which expands the range of positions within which a face display can be implemented and improves the flexibility of the face display, thereby considerably improving the user experience.
PCT/CN2021/089984 2020-04-30 2021-04-26 Image display method and apparatus, and computer device WO2021218926A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010364395.9 2020-04-30
CN202010364395.9A CN113592874A (zh) 2020-04-30 2020-04-30 Image display method, apparatus and computer device

Publications (1)

Publication Number Publication Date
WO2021218926A1 true WO2021218926A1 (fr) 2021-11-04

Family

ID=78237286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089984 WO2021218926A1 (fr) 2020-04-30 2021-04-26 Procédé et appareil d'affichage d'image et dispositif informatique

Country Status (2)

Country Link
CN (1) CN113592874A (fr)
WO (1) WO2021218926A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125286A (zh) * 2021-11-18 2022-03-01 维沃移动通信有限公司 Photographing method and apparatus thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1908962A (zh) * 2006-08-21 2007-02-07 北京中星微电子有限公司 Real-time robust face tracking and display method and system
US20180307897A1 (en) * 2016-05-28 2018-10-25 Samsung Electronics Co., Ltd. System and method for a unified architecture multi-task deep learning machine for object recognition
CN109089157A (zh) * 2018-06-15 2018-12-25 广州华多网络科技有限公司 Video picture cropping method, display device and apparatus
CN110189378A (zh) * 2019-05-23 2019-08-30 北京奇艺世纪科技有限公司 Video processing method and apparatus, and electronic device
CN111145093A (zh) * 2019-12-20 2020-05-12 北京五八信息技术有限公司 Image display method and apparatus, electronic device and storage medium
CN111583273A (zh) * 2020-04-29 2020-08-25 京东方科技集团股份有限公司 Readable storage medium, display device and image processing method thereof

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3788969B2 (ja) * 2002-10-25 2006-06-21 三菱電機株式会社 Real-time facial expression tracking device
TWI394121B (zh) * 2006-12-18 2013-04-21 Sony Corp An image processing apparatus, an image processing method, and a recording medium
JP5923746B2 (ja) * 2011-06-10 2016-05-25 パナソニックIpマネジメント株式会社 Object detection frame display device and object detection frame display method
CN103458219A (zh) * 2013-09-02 2013-12-18 小米科技有限责任公司 Video call face adjustment method, apparatus and terminal device
US9773192B2 (en) * 2015-06-07 2017-09-26 Apple Inc. Fast template-based tracking
US9781350B2 (en) * 2015-09-28 2017-10-03 Qualcomm Incorporated Systems and methods for performing automatic zoom
CN105357436B (zh) * 2015-11-03 2018-07-03 广东欧珀移动通信有限公司 Image cropping method and system for image shooting
CN107786812A (zh) * 2017-10-31 2018-03-09 维沃移动通信有限公司 Photographing method, mobile terminal and computer-readable storage medium
CN109034013B (zh) * 2018-07-10 2023-06-13 腾讯科技(深圳)有限公司 Face image recognition method, apparatus and storage medium
CN109308469B (zh) * 2018-09-21 2019-12-10 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109936703A (zh) * 2019-02-26 2019-06-25 成都第二记忆科技有限公司 Method and apparatus for reconstructing video shot by a monocular camera
CN110611787B (zh) * 2019-06-10 2021-05-28 海信视像科技股份有限公司 Display and image processing method

Also Published As

Publication number Publication date
CN113592874A (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2021008456A1 (fr) Procédé et appareil de traitement d'image, dispositif électronique et support d'informations
CN109712224B (zh) 虚拟场景的渲染方法、装置及智能设备
US11517099B2 (en) Method for processing images, electronic device, and storage medium
US11978219B2 (en) Method and device for determining motion information of image feature point, and task performing method and device
US11962897B2 (en) Camera movement control method and apparatus, device, and storage medium
CN109166150B (zh) 获取位姿的方法、装置存储介质
CN109862412B (zh) 合拍视频的方法、装置及存储介质
CN109922356B (zh) 视频推荐方法、装置和计算机可读存储介质
WO2022134632A1 (fr) Procédé et appareil de traitement de travail
CN109886208B (zh) 物体检测的方法、装置、计算机设备及存储介质
CN111754386B (zh) 图像区域屏蔽方法、装置、设备及存储介质
CN112565806B (zh) 虚拟礼物赠送方法、装置、计算机设备及介质
WO2021238564A1 (fr) Dispositif d'affichage et procédé, appareil et système de détermination de paramètre de distorsion associés, ainsi que support d'informations
CN108845777B (zh) 播放帧动画的方法和装置
WO2021027890A1 (fr) Procédé et dispositif de production d'image de plaque d'immatriculation, et support de stockage informatique
US11720219B2 (en) Method, apparatus and device for displaying lyric, and storage medium
CN111385525B (zh) 视频监控方法、装置、终端及系统
CN108664300B (zh) 一种画中画模式下的应用界面显示方法及装置
CN111158575B (zh) 终端执行处理的方法、装置、设备以及存储介质
WO2021218926A1 (fr) Procédé et appareil d'affichage d'image et dispositif informatique
CN114594885A (zh) 应用图标的管理方法、装置、设备及计算机可读存储介质
CN113259772B (zh) 弹幕处理方法、系统、设备和存储介质
CN113065457B (zh) 人脸检测点处理方法、装置、计算机设备及存储介质
CN110660031B (zh) 图像锐化方法及装置、存储介质
CN111381765B (zh) 文本框的显示方法、装置、计算机设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21796389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21796389

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21796389

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21796389

Country of ref document: EP

Kind code of ref document: A1