CN112967299A - Image cropping method and device, electronic equipment and computer readable medium - Google Patents

Image cropping method and device, electronic equipment and computer readable medium

Info

Publication number
CN112967299A
CN112967299A (application CN202110537060.7A; granted as CN112967299B)
Authority
CN
China
Prior art keywords
information
image
iris
target
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110537060.7A
Other languages
Chinese (zh)
Other versions
CN112967299B (en)
Inventor
张玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuanxin Intellectual Property Service Co.,Ltd.
Original Assignee
Beijing Missfresh Ecommerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Missfresh Ecommerce Co Ltd filed Critical Beijing Missfresh Ecommerce Co Ltd
Priority to CN202110537060.7A priority Critical patent/CN112967299B/en
Publication of CN112967299A publication Critical patent/CN112967299A/en
Application granted granted Critical
Publication of CN112967299B publication Critical patent/CN112967299B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the disclosure disclose an image cropping method and device, electronic equipment and a computer readable medium. One embodiment of the method comprises: in response to determining that a target user has opened and is gazing at a target interface, controlling a front-facing camera installed on a target terminal to shoot a video of a preset duration as a candidate video; determining iris information corresponding to each frame of image included in the candidate video to obtain an iris information set; determining gaze point information based on each piece of iris information in the iris information set to obtain a gaze point information set; determining a gaze area according to the gaze point information set; and cropping the article display image displayed in the target interface according to the gaze area to generate a cropped article display image. This embodiment improves information display efficiency.

Description

Image cropping method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an image cropping method, an image cropping device, electronic equipment and a computer readable medium.
Background
As intelligent terminals have become widely popularized, more and more users use them for information display. As a medium of information transmission and display, an image has a direct information display capability. Therefore, more and more users display information through images on smart terminals.
However, when images are used for information display, the following technical problems often exist:
First, since the screen size of an intelligent terminal is usually fixed, when an article displayed through an image contains many details, the detailed information cannot be intuitively presented to the user, and the information display efficiency is low.
Second, the image cannot be dynamically cropped according to the detail information that the user wants to browse, so the information display is not flexible enough and the information display efficiency is low.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose image cropping methods, apparatuses, electronic devices and computer readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an image cropping method, including: responding to the fact that a target user opens a target interface and watches the target interface, and controlling a front-facing camera installed on a target terminal to shoot a video with preset duration as a candidate video, wherein the candidate video is a video recorded with the face of the target user; determining iris information corresponding to each frame of image included in the candidate video to obtain an iris information set, wherein the iris information in the iris information set comprises: iris position information of the target user and gaze direction information of the target user; determining fixation point information based on each iris information in the iris information set to obtain a fixation point information set, wherein the fixation point information in the fixation point information set comprises: a fixation point coordinate; determining a gazing area according to the gazing point information set; and cutting the article display image displayed in the target interface according to the watching area to generate a cut article display image.
In some embodiments, the determining a gaze region from the set of gaze point information comprises:
according to the gazing point information set, determining the gazing area through the following formula:
min(x1, x2, x3, x4) − t ≤ x ≤ max(x1, x2, x3, x4) + t
min(y1, y2, y3, y4) − t ≤ y ≤ max(y1, y2, y3, y4) + t
wherein x represents the abscissa (the independent variable) and y represents the ordinate (the dependent variable) of a point in the gaze area; t represents a preset threshold; xi (i = 1, 2, 3, 4) represents the abscissa in the gaze point coordinates included in the i-th gaze point information of the gaze point information set, and yi represents the ordinate in the gaze point coordinates included in the i-th gaze point information of the gaze point information set.
In a second aspect, some embodiments of the present disclosure provide an image cropping device, the device comprising: the control unit is configured to respond to the fact that a target user opens a target interface and watches the target interface, and control a front camera installed on a target terminal to shoot a video with a preset time length as a candidate video, wherein the candidate video is a video recorded with the face of the target user; a first determining unit, configured to determine iris information corresponding to each frame of image included in the candidate video, and obtain an iris information set, where the iris information in the iris information set includes: iris position information of the target user and gaze direction information of the target user; a second determining unit configured to determine gaze point information based on each iris information in the iris information set, resulting in a gaze point information set, wherein the gaze point information in the gaze point information set comprises: a fixation point coordinate; a third determination unit configured to determine a gaze area from the set of gaze point information; and the cutting unit is configured to cut the article display image displayed in the target interface according to the watching area so as to generate a cut article display image.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the image cropping method of some embodiments of the present disclosure improves information display efficiency. Specifically, the reason why the information display efficiency is low is that the screen size of an intelligent terminal is usually fixed, so when an article displayed through an image contains many details, the detailed information often cannot be intuitively presented to the user. Based on this, in the image cropping method of some embodiments of the present disclosure, first, in response to determining that a target user opens a target interface and gazes at the target interface, a front-facing camera installed on a target terminal is controlled to shoot a video of a preset duration as a candidate video, where the candidate video is a video recorded with the face of the target user. In practice, when a user browses an image, the user tends to look thoroughly at the detailed information in the image that interests them; thus, by recording the candidate video, the area of the image that interests the user can be determined. Second, iris information corresponding to each frame of image included in the candidate video is determined to obtain an iris information set, where the iris information in the iris information set includes: iris position information of the target user and gaze direction information of the target user. Then, gaze point information is determined based on each piece of iris information in the iris information set to obtain a gaze point information set, where the gaze point information in the gaze point information set includes: gaze point coordinates. In practice, when the user views the image, the eyeball rotates so that the area of interest lies directly in the user's line of sight; thus, by determining the iris information, the region of the image that interests the user can be accurately located. Furthermore, a gaze area is determined from the gaze point information set. Finally, according to the gaze area, the article display image displayed in the target interface is cropped to generate a cropped article display image. In this way, the image can be cropped according to the information the user is interested in, and more of the detailed information of interest can be shown to the user even though the screen size is fixed. The information display efficiency is thereby greatly improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of one application scenario of an image cropping method of some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of an image cropping method according to the present disclosure;
FIG. 3 is a schematic view of an item information display interface;
FIG. 4 is a schematic illustration of a left eye image of a target user;
FIG. 5 is a schematic diagram of constructing connected regions from the gaze point information in a set of gaze point information;
FIG. 6 is a schematic illustration of a candidate image;
FIG. 7 is a flow diagram of further embodiments of an image cropping method according to the present disclosure;
FIG. 8 is a schematic diagram of a model structure of a pre-trained gaze point determination model;
FIG. 9 is a schematic view of an image to be displayed;
FIG. 10 is a schematic block diagram of some embodiments of an image cropping device according to the present disclosure;
FIG. 11 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an" and "the" mentioned in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of one application scenario of an image cropping method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, in response to determining that the target user 102 opens a target interface and watches the target interface, the computing device 101 may control a front-facing camera installed on a target terminal to capture a video with a preset duration as a candidate video 103, where the candidate video 103 is a video recorded with a face of the target user 102; next, the computing device 101 may determine iris information corresponding to each frame of image included in the candidate video 103, and obtain an iris information set 104, where the iris information in the iris information set 104 includes: iris position information of the target user 102 and gaze direction information of the target user 102; then, the computing device 101 may determine gaze point information based on each iris information in the iris information set 104 to obtain a gaze point information set 105, where the gaze point information in the gaze point information set 105 includes: a fixation point coordinate; further, the computing device 101 may determine a gaze region 106 from the set of gaze point information 105 described above; finally, the computing device 101 may crop the item display image 107 displayed in the target interface according to the gaze area 106 to generate a cropped item display image 108.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above, and implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or a software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of an image cropping method according to the present disclosure is shown. The image cropping method comprises the following steps:
step 201, in response to determining that the target user opens the target interface and watches the target interface, controlling a front-facing camera installed on the target terminal to shoot a video with a preset duration as a candidate video.
In some embodiments, an executing body of the image cropping method (e.g., the computing device 101 shown in fig. 1) may, in response to determining that a target user opens and gazes at a target interface, control a front camera installed on a target terminal to capture a video of a preset duration as a candidate video. First, the executing body may determine whether the target interface is opened according to the status code of the target interface. Then, in response to determining that the target interface is opened, the executing body may perform human eye detection on the target user through an AdaBoost algorithm, and, in response to determining that the target user gazes at the target interface, control the front-facing camera installed on the target terminal to shoot a video of the preset duration as the candidate video. The candidate video may be a video recorded with the face of the target user. The target terminal may be a terminal on which the target user's account is logged in.
As an example, the status code may be "200", where "200" indicates that the target interface has been opened. The preset duration may be 4 seconds, and the candidate video may include 4 frames of images; the executing body may acquire 1 image every 1 second to generate the candidate video.
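For illustration only, a sketch of this sampling scheme using OpenCV (not part of the patent; the function name, camera index and fallback frame rate are assumptions):

```python
import cv2

def capture_candidate_frames(duration_s=4, interval_s=1.0, camera_index=0):
    """Grab one frame per interval from the front camera for duration_s
    seconds; the resulting frame list stands in for the candidate video."""
    cap = cv2.VideoCapture(camera_index)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 fps if unreported
    step = max(1, int(fps * interval_s))      # frames to skip between samples
    frames, frame_idx = [], 0
    while len(frames) < int(duration_s / interval_s):
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            frames.append(frame)
        frame_idx += 1
    cap.release()
    return frames
```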
As yet another example, as shown in fig. 3, fig. 3 may include an item information presentation interface 301. The item information presentation interface 301 may include: a target interface 302, an item characteristics display area 303, an item specification information display area 304, and an operation area 305. The item information presentation interface 301 may be an information display page of an item. The target interface 302 may be used to display an image of the appearance of the item. The item characteristics display area 303 may be used to display feature information such as the price and model number of the item. The item specification information display area 304 may be used to display specification data of the item, for example, information such as item capacity and product size. The operation area 305 may be used to display operation controls, for example buttons, for operations such as an "add to shopping cart" operation, a "favorite" operation, and a "buy" operation.
Step 202, determining iris information corresponding to each frame of image included in the candidate video to obtain an iris information set.
In some embodiments, the executing entity may determine iris information corresponding to each frame of image included in the candidate video, resulting in an iris information set. Wherein, the iris information in the iris information set may include: iris position information of the target user and gaze direction information of the target user. First, the execution subject may determine the iris of the target user included in each frame of image through a two-step location algorithm of Hough transform, so as to obtain the iris position information included in the iris information corresponding to the image. Then, gaze direction information included in the iris information is determined based on the iris position information.
As an example, the left eye image of the target user may be as shown in fig. 4. Fig. 4 includes a sclera image 401, an iris image 404, a pupil image 402 and a pupil center 403 of the left eye of the target user. The execution body may use coordinates corresponding to the pupil center 403 as the iris position information. The execution subject described above may first take the center point of the scleral image 401 as the starting point of the target vector. Then, the pupil center 403 is set as an end point of the target vector. And finally, taking the target vector as the gaze direction information.
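For illustration, a minimal sketch of the vector construction described above (not from the patent; the function name and the coordinate values are illustrative):

```python
import numpy as np

def gaze_direction(sclera_center, pupil_center):
    """Target vector from the sclera-image center (start point) to the
    pupil center (end point); the pupil center also serves as the iris
    position information."""
    return np.asarray(pupil_center, dtype=float) - np.asarray(sclera_center, dtype=float)

iris_position = (52.0, 31.0)                        # e.g. pupil center 403 in fig. 4
print(gaze_direction((48.0, 30.0), iris_position))  # [4. 1.]
```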
Step 203, determining the gazing point information based on each iris information in the iris information set to obtain a gazing point information set.
In some embodiments, the performing subject determining the gaze point information based on each iris information in the set of iris information to obtain the set of gaze point information may include:
firstly, feature vector conversion is carried out on the iris information to generate feature vectors.
The executing body can perform one-hot encoding processing on the iris information to realize the feature vector conversion of the iris information.
And secondly, inputting the characteristic vectors into a pre-trained target model to generate the fixation point information.
The target model may be a model for determining the target user's gaze point. The target model can be obtained by training through a training sample set. The training samples in the training sample set may include iris information of the user and corresponding gaze point coordinates. The gaze point information may represent coordinates of a location at which the target user gazes at the target interface.
As an example, the above-mentioned gazing point information may be [34, 12 ].
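For illustration only, a stub of this feature-encoding and inference step (the encoder and the target model here are placeholders, not the patent's trained model):

```python
def gaze_points(iris_info_set, target_model, encode):
    """Map each piece of iris information to gaze point coordinates on
    the target interface."""
    points = []
    for iris_info in iris_info_set:
        feature = encode(iris_info)            # e.g. a one-hot feature vector
        points.append(target_model(feature))   # -> (x, y) gaze point
    return points

# Stub usage; a real system would plug in the trained target model here.
demo = gaze_points([{"pos": (52, 31)}],
                   target_model=lambda f: (34, 12),
                   encode=lambda info: info["pos"])
print(demo)  # [(34, 12)]
```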
And step 204, determining a gazing area according to the gazing point information set.
In some embodiments, the determining the gazing region by the execution subject according to the gazing point information set may include:
firstly, constructing a connected region according to the gazing point information in the gazing point information set to obtain at least one connected region.
The connected region of the at least one connected region may be a region in which coordinate points corresponding to the respective gaze point information in the gaze point information set are sequentially connected.
As an example, as shown in fig. 5, fig. 5 may include coordinate points 501 corresponding to 4 pieces of gaze point information, from which 3 different connected regions can be generated.
And secondly, determining a communication area with the largest area in the at least one communication area as the watching area.
The execution body may determine the area of the connected region by determining the number of the pixels in the connected region in the at least one connected region.
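For illustration, a sketch of selecting the largest connected region (an assumption-laden stand-in: the patent counts the pixels inside each region, while this sketch scores candidate vertex orderings of the gaze points with the shoelace formula):

```python
import itertools
import numpy as np

def polygon_area(vertices):
    """Shoelace formula: area of the polygon obtained by connecting the
    given points in order."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def largest_connected_region(points):
    """Enumerate vertex orderings of the gaze points and keep the ordering
    that encloses the largest area."""
    best = max(itertools.permutations(points), key=polygon_area)
    return list(best), polygon_area(best)

region, area = largest_connected_region([(34, 12), (80, 15), (78, 60), (30, 58)])
print(area)
```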
Step 205, according to the watching region, clipping the article display image displayed in the target interface to generate a clipped article display image.
In some embodiments, the cropping the item display image displayed in the target interface by the execution subject to generate the cropped item display image may include the following steps:
step one, the subimages in the area of the attention in the article display image are determined as candidate images.
As an example, the candidate image may be as shown in fig. 6.
And secondly, amplifying the candidate images by the target multiple to generate an image to be presented.
As an example, the target multiple may be 0.75 times.
And thirdly, placing the image to be presented at the center of the blank image to generate the cut article display image.
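For illustration, a sketch of these three cropping steps with OpenCV and NumPy (not from the patent; it assumes the enlarged crop fits inside the blank canvas):

```python
import cv2
import numpy as np

def crop_and_center(item_image, region, scale, canvas_shape):
    """Cut the sub-image inside the gaze area, resize it by `scale`, and
    paste it at the center of a blank image of shape `canvas_shape`."""
    x0, y0, x1, y1 = (int(v) for v in region)
    candidate = item_image[y0:y1, x0:x1]                       # step one
    resized = cv2.resize(candidate, None, fx=scale, fy=scale)  # step two
    canvas = np.zeros(canvas_shape, dtype=item_image.dtype)    # blank image
    h, w = resized.shape[:2]
    cy, cx = (canvas_shape[0] - h) // 2, (canvas_shape[1] - w) // 2
    canvas[cy:cy + h, cx:cx + w] = resized                     # step three
    return canvas
```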
The above embodiments of the present disclosure have the following beneficial effects: the image cropping method of some embodiments of the present disclosure improves information display efficiency. Specifically, the reason why the information display efficiency is low is that the screen size of an intelligent terminal is usually fixed, so when an article displayed through an image contains many details, the detailed information often cannot be intuitively presented to the user. Based on this, in the image cropping method of some embodiments of the present disclosure, first, in response to determining that a target user opens a target interface and gazes at the target interface, a front-facing camera installed on a target terminal is controlled to shoot a video of a preset duration as a candidate video, where the candidate video is a video recorded with the face of the target user. In practice, when a user browses an image, the user tends to look thoroughly at the detailed information in the image that interests them; thus, by recording the candidate video, the area of the image that interests the user can be determined. Second, iris information corresponding to each frame of image included in the candidate video is determined to obtain an iris information set, where the iris information in the iris information set includes: iris position information of the target user and gaze direction information of the target user. Then, gaze point information is determined based on each piece of iris information in the iris information set to obtain a gaze point information set, where the gaze point information in the gaze point information set includes: gaze point coordinates. In practice, when the user views the image, the eyeball rotates so that the area of interest lies directly in the user's line of sight; thus, by determining the iris information, the region of the image that interests the user can be accurately located. Furthermore, a gaze area is determined from the gaze point information set. Finally, according to the gaze area, the article display image displayed in the target interface is cropped to generate a cropped article display image. In this way, the image can be cropped according to the information the user is interested in, and more of the detailed information of interest can be shown to the user even though the screen size is fixed. The information display efficiency is thereby greatly improved.
With further reference to FIG. 7, a flow 700 of further embodiments of an image cropping method is illustrated. The process 700 of the image cropping method comprises the following steps:
step 701, in response to determining that the target user opens the target interface and watches the target interface, controlling a front-facing camera installed on the target terminal to shoot a video with a preset duration as a candidate video.
In some embodiments, the specific implementation of step 701 and the technical effect thereof may refer to step 201 in those embodiments corresponding to fig. 2, and are not described herein again.
In step 702, local binarization processing is performed on the image to generate an image after the local binarization processing.
In some embodiments, the executing body may perform local binarization processing on the eye image included in the image by using the OTSU (Otsu, maximum between-class variance) algorithm to generate the image after the local binarization processing.
In some optional implementation manners of some embodiments, the performing the local binarization processing on the image by the performing body to generate the image after the local binarization processing may include the following steps:
first, a target area in the image is determined.
The target area may include an eyeball area of the target user. The execution subject may determine the target area through a YOLOv2 algorithm.
And secondly, performing binarization processing on the target area to generate an image after the local binarization processing.
The execution body may perform binarization processing on the target region through an adaptive threshold binarization algorithm to generate the image after the local binarization processing.
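For illustration, a sketch of this local binarization using OpenCV's adaptive threshold (the eye bounding box is assumed to come from an external detector such as YOLOv2; block size and constant are illustrative):

```python
import cv2

def local_binarize(image, eye_box):
    """Binarize only the detected eyeball region and leave the rest of the
    (grayscale) image untouched."""
    x, y, w, h = eye_box                      # assumed output of a detector
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    roi = gray[y:y + h, x:x + w]
    binary = cv2.adaptiveThreshold(roi, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    out = gray.copy()
    out[y:y + h, x:x + w] = binary            # locally binarized image
    return out
```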
Step 703, performing image enhancement processing on the image after the local binarization processing to generate an image after the image enhancement processing.
The execution subject may perform image enhancement processing on the image after the local binarization processing by using an image enhancement algorithm to generate an image after the image enhancement processing. The image enhancement algorithm may include, but is not limited to, at least one of: histogram equalization algorithm, logarithmic image enhancement algorithm, exponential image enhancement algorithm, image enhancement algorithm based on Laplace operator and gamma transformation algorithm.
Step 704, determining the iris position of the target user in the image after the image enhancement processing by an iris positioning algorithm to generate iris position information included in the iris information.
In some embodiments, the executing entity may determine the iris position of the target user in the image after the image enhancement processing through an iris positioning algorithm to generate iris position information included in the iris information. The iris positioning algorithm may include, but is not limited to, any one of the following: an integro-differential positioning algorithm, an iris positioning algorithm based on the Sobel operator, an iris positioning algorithm based on the Prewitt operator, an iris positioning algorithm based on the LoG (Laplacian of Gaussian) operator, an iris positioning algorithm based on the Canny operator, and a projection-based iris positioning algorithm.
Step 705, determining eyeball center coordinates of the target user in the image after the image enhancement processing.
In some embodiments, the executing body may first determine an eyeball contour of the target user in the image after the image enhancement processing through an edge detection algorithm, and then determine the coordinates corresponding to the center point of the eyeball contour as the eyeball center coordinates. The edge detection algorithm may be, but is not limited to, any of the following: the Sobel edge detection algorithm and an edge detection algorithm based on the Laplacian operator.
Step 706, determining the gazing direction information included in the iris information according to the eyeball center coordinates and the iris position information.
In some embodiments, the execution subject may determine gaze direction information included in the iris information according to the eyeball center coordinates and the iris position information. The executing body may first use the eyeball center coordinates as a starting point of the vector. Then, the coordinates corresponding to the iris position information are set as the end point of the vector. And finally, determining the vector as the gaze direction information.
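For illustration, a sketch of the eyeball-center estimation (Canny is used here as a stand-in edge detector; the patent lists Sobel- and Laplacian-based detection instead):

```python
import cv2

def eyeball_center(enhanced_eye_image):
    """Centroid of the largest contour as the eyeball center; the gaze
    direction vector of step 706 is then iris_position - eyeball_center."""
    edges = cv2.Canny(enhanced_eye_image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no eyeball contour found")
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```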
And step 707, determining the gazing point information based on each iris information in the iris information set to obtain a gazing point information set.
In some embodiments, the performing subject determining the gazing point information based on each iris information in the set of iris information may include:
first, iris position information included in the iris information is input to a first sub-network included in a pre-trained gaze point determination model to generate first feature information.
Wherein the first sub-network may include: a first convolutional layer, a second convolutional layer and a third convolutional layer, the first sub-network may adopt a linear rectification function as an activation function.
And a second step of inputting gaze direction information included in the iris information into a second sub-network included in the pre-trained gaze point determination model to generate second feature information.
Wherein the second sub-network may include: a fourth convolutional layer, a fifth convolutional layer and a sixth convolutional layer. The first sub-network and the second sub-network are parameter shared. The second sub-network may employ a linear rectification function as the activation function.
And thirdly, inputting the image after the local binarization processing into a third sub-network included in the pre-trained fixation point determination model to generate third characteristic information. Wherein the third sub-network may include: a seventh convolutional layer, an eighth convolutional layer, and a ninth convolutional layer.
And fourthly, splicing the first characteristic information, the second characteristic information and the third characteristic information to generate fourth characteristic information.
As an example, the above-described first characteristic information may be "00010", the second characteristic information may be "00111", and the third characteristic information may be "01000". The fourth characteristic information may then be "000100011101000".
And a fifth step of inputting the fourth feature information to a fully connected layer included in the previously trained gaze point determination model to generate the gaze point information.
As an example, a model structure diagram of the above-mentioned pre-trained gaze point determination model may be as shown in fig. 8. The pre-trained gaze point determination model may include: a first sub-network 801, a second sub-network 802, a third sub-network 803 and a fully connected layer 804. The first sub-network 801 may include: first, second, and third convolutional layers 8011, 8012, 8013. The second sub-network 802 may include: a fourth convolutional layer 8021, a fifth convolutional layer 8022, and a sixth convolutional layer 8023. The third sub-network may include: a seventh convolutional layer 8031, an eighth convolutional layer 8032, and a ninth convolutional layer 8033.
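For illustration, the described structure can be sketched in PyTorch as follows; channel counts, kernel sizes, the pooling step and the encoding of the inputs as single-channel maps are all assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class GazePointModel(nn.Module):
    """Three convolutional branches (iris position, gaze direction,
    binarized eye image) whose features are spliced and fed to a fully
    connected head that outputs the gaze point coordinates."""
    def __init__(self):
        super().__init__()
        def branch():
            # three conv layers with ReLU (linear rectification) activations
            return nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pos_net = branch()        # first sub-network
        self.dir_net = self.pos_net    # second sub-network shares its parameters
        self.img_net = branch()        # third sub-network
        self.head = nn.Linear(16 * 3, 2)  # fully connected layer -> (x, y)

    def forward(self, pos_map, dir_map, eye_img):
        feats = torch.cat([self.pos_net(pos_map), self.dir_net(dir_map),
                           self.img_net(eye_img)], dim=1)  # fourth feature info
        return self.head(feats)

model = GazePointModel()
x = torch.randn(1, 1, 16, 16)
print(model(x, x, x).shape)  # torch.Size([1, 2])
```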
Step 708, determining a gazing area according to the gazing point information set.
In some embodiments, the execution body may determine the gaze region by sequentially determining an upper boundary, a lower boundary, a left boundary, and a right boundary according to the gaze point coordinates included in each piece of gaze point information in the gaze point information set.
In some optional implementations of some embodiments, the execution subject may determine the gaze region according to the set of gaze point information by:
min(x1, x2, x3, x4) − t ≤ x ≤ max(x1, x2, x3, x4) + t
min(y1, y2, y3, y4) − t ≤ y ≤ max(y1, y2, y3, y4) + t
wherein x represents the abscissa (the independent variable) and y represents the ordinate (the dependent variable) of a point in the gaze region; t represents the preset threshold; xi (i = 1, 2, 3, 4) represents the abscissa in the gaze point coordinates included in the i-th gaze point information in the gaze point information set, and yi represents the ordinate in the gaze point coordinates included in the i-th gaze point information in the gaze point information set.
The pieces of gaze point information in the gaze point information set may be sorted clockwise starting from the upper left corner according to the gaze point coordinates they include; after sorting, they are the 1st, 2nd, 3rd and 4th gaze point information, respectively.
As an example, the preset threshold may be 10.
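For illustration, a minimal NumPy sketch of this threshold-expanded bounding box (function name and coordinate values are illustrative, not from the patent):

```python
import numpy as np

def gaze_region(points, threshold=10):
    """Bounding box of the gaze points, expanded on every side by the
    preset threshold (in pixels), per the formula above."""
    pts = np.asarray(points, dtype=float)   # shape (4, 2): (x, y) per gaze point
    x_min, y_min = pts.min(axis=0) - threshold
    x_max, y_max = pts.max(axis=0) + threshold
    return x_min, y_min, x_max, y_max

# Four clockwise-sorted gaze points (coordinates are illustrative).
print(gaze_region([(34, 12), (80, 15), (78, 60), (30, 58)]))
# -> (20.0, 2.0, 90.0, 70.0)
```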
And 709, cutting the article display image displayed in the target interface according to the watching area to generate a cut article display image.
In some embodiments, the specific implementation of step 709 and the technical effect thereof may refer to step 205 in those embodiments corresponding to fig. 2, and are not described herein again.
And 710, performing fuzzification processing on the article display image to generate a fuzzified article display image.
In some embodiments, the executing body may perform blurring processing on the article display image to generate the blurred article display image. For example, the executing body may perform Gaussian blurring on the article display image to generate the blurred article display image.
And 711, amplifying the cut article display image to generate a target image.
In some embodiments, the execution subject may perform an enlargement process on the cut article display image to generate the target image.
For example, the executing body may enlarge the cut article display image by a factor of 5 to generate the target image.
And 712, superposing the target image on the fuzzified article display image to generate an image to be displayed.
In some embodiments, the executing subject may superimpose the target image on the fuzzified article display image to generate an image to be displayed.
As an example, the result may be as shown in fig. 9. Fig. 9 may include an image 903 to be displayed. The image 903 to be displayed may include: a target image 901 and a blurred article display image 902.
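For illustration, a sketch of steps 710 to 712 combined (the Gaussian kernel size and scale factor are illustrative; it assumes the enlarged crop fits inside the blurred background):

```python
import cv2

def compose_display_image(item_image, cropped, scale=5):
    """Blur the full item display image, enlarge the cropped region, and
    superimpose the enlargement at the center of the blurred background."""
    blurred = cv2.GaussianBlur(item_image, (31, 31), 0)       # step 710
    target = cv2.resize(cropped, None, fx=scale, fy=scale)    # step 711
    h, w = target.shape[:2]
    bg_h, bg_w = blurred.shape[:2]
    y, x = (bg_h - h) // 2, (bg_w - w) // 2
    blurred[y:y + h, x:x + w] = target                        # step 712
    return blurred
```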
And 713, displaying the image to be displayed on the target terminal.
In some embodiments, the execution main body may push the image to be displayed to the target terminal in a wireless connection manner, and display the image on the target terminal.
As can be seen from fig. 7, compared with the description of some embodiments corresponding to fig. 2, the present disclosure first performs local binarization processing on the image to generate a locally binarized image. To ensure that the iris positioning algorithm can accurately extract the feature information of the eye image, binarization processing usually needs to be performed on the image; in practice, however, the iris positioning algorithm only needs to extract feature information from the eye region included in the image, and binarizing the whole image increases the amount of data to be processed. Therefore, performing local binarization on the image greatly reduces the data processing amount. Second, the gaze point information set is determined through a pre-trained gaze point determination model. Because the model structure of the pre-trained gaze point determination model is simple, the requirement on hardware is low, and the method is applicable to various mobile terminals. In addition, the pre-trained gaze point determination model can extract the iris position information, the gaze direction information and the feature information of the locally binarized image in parallel, which greatly improves the speed of generating the gaze point information. Furthermore, the gaze region generating formula of the present disclosure appropriately enlarges the gaze region, so that the information the user is interested in is contained in the gaze region, improving the information display efficiency.
With further reference to fig. 10, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an image cropping device, which correspond to those illustrated in fig. 2, and which may be particularly applicable in various electronic devices. As shown in fig. 10, the image cropping device 1000 of some embodiments includes: a control unit 1001, a first determination unit 1002, a second determination unit 1003, a third determination unit 1004, and a clipping unit 1005. The control unit 1001 is configured to control a front camera installed on a target terminal to shoot a video with a preset duration as a candidate video in response to determining that a target user opens a target interface and watches the target interface, wherein the candidate video is a video recorded with a face of the target user; a first determining unit 1002, configured to determine iris information corresponding to each frame of image included in the candidate video, and obtain an iris information set, where the iris information in the iris information set includes: iris position information of the target user and gaze direction information of the target user; a second determining unit 1003, configured to determine, based on each iris information in the iris information set, gaze point information to obtain a gaze point information set, where the gaze point information in the gaze point information set includes: a fixation point coordinate; a third determination unit 1004 configured to determine a gaze region from the set of gaze point information; a cropping unit 1005 configured to crop the article display image displayed in the target interface according to the gaze area to generate a cropped article display image.
It will be understood that the units described in the apparatus 1000 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 1000 and the units included therein, and are not described herein again.
Referring now to FIG. 11, shown is a schematic block diagram of an electronic device (such as computing device 101 shown in FIG. 1) 1100 suitable for use in implementing some embodiments of the present disclosure. The electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 11, the electronic device 1100 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1101 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1102 or a program loaded from a storage means 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the electronic device 1100 are also stored. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
Generally, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1107 including, for example, Liquid Crystal Displays (LCDs), speakers, vibrators, and the like; storage devices 1108, including, for example, magnetic tape, hard disk, etc.; and a communication device 1109. The communication means 1109 may allow the electronic device 1100 to communicate wirelessly or wiredly with other devices to exchange data. While fig. 11 illustrates an electronic device 1100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 11 may represent one device or may represent a plurality of devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. The computer program, when executed by the processing apparatus 1101, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to the fact that a target user opens a target interface and watches the target interface, and controlling a front-facing camera installed on a target terminal to shoot a video with preset duration as a candidate video, wherein the candidate video is a video recorded with the face of the target user;
determining iris information corresponding to each frame of image included in the candidate video to obtain an iris information set, wherein the iris information in the iris information set comprises: iris position information of the target user and gaze direction information of the target user; determining fixation point information based on each iris information in the iris information set to obtain a fixation point information set, wherein the fixation point information in the fixation point information set comprises: a fixation point coordinate; determining a gazing area according to the gazing point information set; and cutting the article display image displayed in the target interface according to the watching area to generate a cut article display image.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a control unit, a first determination unit, a second determination unit, a third determination unit, and a clipping unit. The names of these units do not constitute a limitation on the units themselves in some cases, and for example, the control unit may also be described as "a unit that controls a front camera mounted on a target terminal to capture a video of a preset duration as a candidate video in response to determining that a target user opens a target interface and looks at the target interface".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description presents only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. An image cropping method, comprising:
in response to determining that a target user has opened a target interface and is gazing at the target interface, controlling a front-facing camera installed on a target terminal to capture a video of a preset duration as a candidate video, wherein the candidate video is a video recording the face of the target user;
determining iris information corresponding to each frame of image included in the candidate video to obtain an iris information set, wherein the iris information in the iris information set comprises: iris position information of the target user and gaze direction information of the target user;
determining gaze point information based on each piece of iris information in the iris information set to obtain a gaze point information set, wherein the gaze point information in the gaze point information set comprises: a gaze point coordinate;
determining a gaze region according to the gaze point information set;
and cropping, according to the gaze region, the article display image displayed in the target interface to generate a cropped article display image.
2. The method of claim 1, wherein the method further comprises:
performing blurring processing on the article display image to generate a blurred article display image;
enlarging the cropped article display image to generate a target image;
superimposing the target image on the blurred article display image to generate an image to be displayed;
and displaying the image to be displayed on the target terminal.
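A minimal OpenCV sketch of claim 2's display composition, offered for illustration only and not as part of the claims; the Gaussian kernel size and the 1.5x enlargement factor are assumptions, since the claim only requires blurring, enlarging, and superimposing.

```python
import cv2

def compose_display_image(item_image, cropped, scale=1.5):
    # Blurring processing on the article display image.
    blurred = cv2.GaussianBlur(item_image, (31, 31), 0)
    # Enlarge the cropped article display image into the target image.
    target = cv2.resize(cropped, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_LINEAR)
    h, w = target.shape[:2]
    # Superimpose the target image, centered, on the blurred image
    # (clipped if the enlarged crop exceeds the display image).
    y0 = max((blurred.shape[0] - h) // 2, 0)
    x0 = max((blurred.shape[1] - w) // 2, 0)
    blurred[y0:y0 + h, x0:x0 + w] = target[:blurred.shape[0] - y0,
                                           :blurred.shape[1] - x0]
    return blurred  # the image to be displayed
```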
3. The method of claim 1, wherein the determining iris information corresponding to each frame of image included in the candidate video comprises:
performing local binarization processing on the image to generate an image after the local binarization processing;
performing image enhancement processing on the image subjected to the local binarization processing to generate an image subjected to image enhancement processing;
and determining the iris position of the target user in the image after the image enhancement processing through an iris positioning algorithm so as to generate iris position information included in the iris information.
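The following sketch illustrates one possible realization of claim 3's preprocessing chain. The adaptive threshold, the morphological opening used as "image enhancement", and the Hough circle transform used as the iris positioning algorithm are all assumptions; the claim does not name specific algorithms.

```python
import cv2

def locate_iris(frame_gray, eye_roi):
    """Return the (cx, cy, r) iris position in full-frame coordinates, or None."""
    x, y, w, h = eye_roi  # target area containing the eyeball (see claim 5)
    eye = frame_gray[y:y + h, x:x + w]
    # Local binarization processing, restricted to the eye region.
    binarized = cv2.adaptiveThreshold(eye, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY, 11, 2)
    # Image enhancement processing: a small morphological opening to
    # suppress speckle noise (one plausible choice among many).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    enhanced = cv2.morphologyEx(binarized, cv2.MORPH_OPEN, kernel)
    # Iris positioning: take the strongest detected circle as the iris.
    circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=w, param1=100, param2=20,
                               minRadius=h // 8, maxRadius=h // 2)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]
    return x + cx, y + cy, r
```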
4. The method of claim 3, wherein the determining iris information corresponding to each frame of image included in the candidate video further comprises:
determining eyeball center coordinates of the target user in the image after the image enhancement processing;
and determining the gaze direction information included in the iris information according to the eyeball center coordinates and the iris position information.
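Claim 4's gaze direction can be illustrated as the normalized offset of the iris position from the eyeball center; representing the direction as this 2-D unit vector is an assumption about the encoding, not something the claim prescribes.

```python
import numpy as np

def gaze_direction(eyeball_center, iris_position):
    # Offset of the iris from the eyeball center, normalized to unit length.
    offset = np.asarray(iris_position, float) - np.asarray(eyeball_center, float)
    norm = np.linalg.norm(offset)
    return offset / norm if norm > 0 else offset

# Example: iris displaced slightly right of and above the eyeball center.
direction = gaze_direction((320.0, 240.0), (332.0, 236.0))
```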
5. The method according to claim 3, wherein the performing local binarization processing on the image to generate the image after the local binarization processing comprises:
determining a target area in the image, wherein the target area is an area containing eyeballs of the target user;
and carrying out binarization processing on the target area to generate the image after the local binarization processing.
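As a sketch of claim 5, only the target area is binarized while the rest of the frame is left untouched; the fixed threshold value is an assumption.

```python
import cv2

def binarize_eye_region(frame_gray, eye_roi, thresh=60):
    x, y, w, h = eye_roi  # area containing the target user's eyeball
    out = frame_gray.copy()
    # Binarize only the target area; the remainder of the image stays as-is.
    _, roi_bin = cv2.threshold(frame_gray[y:y + h, x:x + w],
                               thresh, 255, cv2.THRESH_BINARY)
    out[y:y + h, x:x + w] = roi_bin
    return out  # the image after the local binarization processing
```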
6. The method of claim 5, wherein the determining gaze point information based on each piece of iris information in the iris information set comprises:
inputting iris position information included in the iris information into a first sub-network included in a pre-trained gaze point determination model to generate first feature information, wherein the first sub-network comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer, and adopts a linear rectification function (ReLU) as an activation function;
inputting gaze direction information included in the iris information into a second sub-network included in the pre-trained gaze point determination model to generate second feature information, wherein the second sub-network comprises a fourth convolutional layer, a fifth convolutional layer, and a sixth convolutional layer, the first sub-network and the second sub-network share parameters, and the second sub-network adopts a linear rectification function as an activation function;
inputting the image after the local binarization processing into a third sub-network included in the pre-trained gaze point determination model to generate third feature information, wherein the third sub-network comprises a seventh convolutional layer, an eighth convolutional layer, and a ninth convolutional layer;
concatenating the first feature information, the second feature information, and the third feature information to generate fourth feature information;
and inputting the fourth feature information into a fully connected layer included in the pre-trained gaze point determination model to generate the gaze point information.
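A PyTorch sketch of the gaze point determination model of claim 6 follows. Channel counts, kernel sizes, and the example input shapes are assumptions (the claim fixes only the three-convolutional-layer structure of each sub-network, ReLU activations in the first two sub-networks, parameter sharing between them, and the final fully connected layer); parameter sharing is modeled here by reusing the first sub-network's module for the second input.

```python
import torch
import torch.nn as nn

class GazePointModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First sub-network (convolutional layers 1-3): iris position
        # information, treated here as a 1-D single-channel feature map.
        self.subnet1 = nn.Sequential(
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Second sub-network (layers 4-6): gaze direction information.
        # Parameter sharing with the first sub-network, per claim 6.
        self.subnet2 = self.subnet1
        # Third sub-network (layers 7-9): the locally binarized image.
        # (ReLU here is an assumption; the claim is silent on its activation.)
        self.subnet3 = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fully connected layer mapping the fused features to (x, y).
        self.fc = nn.Linear(32 * 4 + 32 * 4 + 32 * 8 * 8, 2)

    def forward(self, iris_pos, gaze_dir, binarized_img):
        f1 = self.subnet1(iris_pos).flatten(1)       # first feature information
        f2 = self.subnet2(gaze_dir).flatten(1)       # second feature information
        f3 = self.subnet3(binarized_img).flatten(1)  # third feature information
        fused = torch.cat([f1, f2, f3], dim=1)       # fourth feature information
        return self.fc(fused)                        # gaze point coordinate

# Example shapes: 4-element position/direction vectors, a 64x64 eye image.
model = GazePointModel()
xy = model(torch.randn(1, 1, 4), torch.randn(1, 1, 4),
           torch.randn(1, 1, 64, 64))
```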
7. The method of claim 6, wherein the determining a gaze region according to the gaze point information set comprises:
sequentially determining an upper boundary, a lower boundary, a left boundary, and a right boundary according to the gaze point coordinates included in each piece of gaze point information in the gaze point information set, so as to determine the gaze region.
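Claim 7's boundary computation reduces to taking extrema over the collected gaze point coordinates, as in this small illustrative function.

```python
def gaze_region(gaze_points):
    """Upper, lower, left, and right boundaries over all gaze points."""
    xs = [x for x, _ in gaze_points]
    ys = [y for _, y in gaze_points]
    return min(ys), max(ys), min(xs), max(xs)  # top, bottom, left, right

# Example: three gaze points collected from consecutive frames.
print(gaze_region([(120, 80), (200, 140), (160, 110)]))  # (80, 140, 120, 200)
```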
8. An image cropping device, comprising:
a control unit configured to control a front-facing camera installed on a target terminal to capture a video of a preset duration as a candidate video in response to determining that a target user has opened a target interface and is gazing at the target interface, wherein the candidate video is a video recording the face of the target user;
a first determining unit configured to determine iris information corresponding to each frame of image included in the candidate video to obtain an iris information set, wherein the iris information in the iris information set comprises: iris position information of the target user and gaze direction information of the target user;
a second determining unit configured to determine gaze point information based on each piece of iris information in the iris information set to obtain a gaze point information set, wherein the gaze point information in the gaze point information set comprises: a gaze point coordinate;
a third determining unit configured to determine a gaze region according to the gaze point information set;
and a cropping unit configured to crop, according to the gaze region, the article display image displayed in the target interface to generate a cropped article display image.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202110537060.7A 2021-05-18 2021-05-18 Image cropping method and device, electronic equipment and computer readable medium Active CN112967299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110537060.7A CN112967299B (en) 2021-05-18 2021-05-18 Image cropping method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112967299A true CN112967299A (en) 2021-06-15
CN112967299B CN112967299B (en) 2021-08-31

Family

ID=76279803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110537060.7A Active CN112967299B (en) 2021-05-18 2021-05-18 Image cropping method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112967299B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150293588A1 (en) * 2014-04-10 2015-10-15 Samsung Electronics Co., Ltd. Eye gaze tracking method and apparatus and computer-readable recording medium
CN110856035A (en) * 2018-07-24 2020-02-28 顶级公司 Processing image data to perform object detection
CN112114659A (en) * 2019-06-19 2020-12-22 托比股份公司 Method and system for determining a fine point of regard for a user
CN111443965A (en) * 2020-03-10 2020-07-24 Oppo广东移动通信有限公司 Picture display method and device, terminal and storage medium
CN112711984A (en) * 2020-12-09 2021-04-27 北京航空航天大学 Fixation point positioning method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOHN F. Y. BROOKFIELD: "Biased image cropping and non-independent samples", BMC Biology, 2016 *
MAO Yunfeng et al.: "Research on gaze tracking technology based on deep neural networks" (in Chinese), Modern Electronics Technique *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762266A (en) * 2021-09-01 2021-12-07 北京中星天视科技有限公司 Target detection method, device, electronic equipment and computer readable medium
CN113762266B (en) * 2021-09-01 2024-04-26 北京中星天视科技有限公司 Target detection method, device, electronic equipment and computer readable medium
CN115225926A (en) * 2022-06-27 2022-10-21 广州博冠信息科技有限公司 Game live broadcast picture processing method and device, computer equipment and storage medium
CN115225926B (en) * 2022-06-27 2023-12-12 广州博冠信息科技有限公司 Game live broadcast picture processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112967299B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
US20210158533A1 (en) Image processing method and apparatus, and storage medium
US11436863B2 (en) Method and apparatus for outputting data
CN110188719B (en) Target tracking method and device
CN111369427B (en) Image processing method, image processing device, readable medium and electronic equipment
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
CN110136054B (en) Image processing method and device
CN110827378A (en) Virtual image generation method, device, terminal and storage medium
CN112967299B (en) Image cropping method and device, electronic equipment and computer readable medium
CN111368685A (en) Key point identification method and device, readable medium and electronic equipment
CN110784754A (en) Video display method and device and electronic equipment
CN113542902B (en) Video processing method and device, electronic equipment and storage medium
CN112988032B (en) Control display method and device and electronic equipment
CN111757100B (en) Method and device for determining camera motion variation, electronic equipment and medium
CN111314620B (en) Photographing method and apparatus
CN111784712A (en) Image processing method, device, equipment and computer readable medium
CN113033677A (en) Video classification method and device, electronic equipment and storage medium
CN112418249A (en) Mask image generation method and device, electronic equipment and computer readable medium
CN109977905B (en) Method and apparatus for processing fundus images
CN110349108B (en) Method, apparatus, electronic device, and storage medium for processing image
CN109949213B (en) Method and apparatus for generating image
CN110189364B (en) Method and device for generating information, and target tracking method and device
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111369475A (en) Method and apparatus for processing video
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN111258414A (en) Method and device for adjusting screen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231109

Address after: 518000 807, No. 121, Minsheng Avenue, Shangcun Community, Gongming Street, Guangming District, Shenzhen, Guangdong

Patentee after: Shenzhen Zhuanxin Intellectual Property Service Co.,Ltd.

Address before: 100102 room 076, no.1-302, 3 / F, commercial building, No.9 Wangjing street, Chaoyang District, Beijing

Patentee before: BEIJING MISSFRESH E-COMMERCE Co.,Ltd.