WO2020233201A1 - Icon position determination method and device - Google Patents

Icon position determination method and device

Info

Publication number
WO2020233201A1
WO2020233201A1 (PCT/CN2020/078679)
Authority
WO
WIPO (PCT)
Prior art keywords
target
salient
icon
target image
distance
Prior art date
Application number
PCT/CN2020/078679
Other languages
English (en)
French (fr)
Inventor
李马丁
郑云飞
章佳杰
宁小东
宋玉岩
于冰
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority to EP20809629.7A priority Critical patent/EP3974953A4/en
Publication of WO2020233201A1 publication Critical patent/WO2020233201A1/zh
Priority to US17/532,349 priority patent/US11574415B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/1444Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits

Definitions

  • This application relates to the field of image processing technology, and in particular to a method and device for determining the position of an icon.
  • In the related art, the title text, watermarks, and other icons of pictures or videos are generally placed in the middle of the picture or video frame, or in its upper and lower parts.
  • this application provides a method and device for determining the position of an icon.
  • According to a first aspect, a method for determining the position of an icon is provided, including: detecting a target object in a target image and determining a reference position of the target object in the target image; detecting a salient position in the target image; and selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • A device for determining the position of an icon is also provided, including: a reference position determining module configured to detect a target object in a target image and determine a reference position of the target object in the target image; a salient position detection module configured to detect a salient position in the target image; and a position selection module configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • An electronic device is also provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the icon position determination method described in the first aspect.
  • a readable storage medium is provided.
  • When the instructions in the storage medium are executed by the processor of an electronic device, the electronic device can perform the icon position determination method described in the first aspect.
  • a computer program product is provided.
  • When the instructions in the computer program product are executed by the processor of an electronic device, the electronic device can perform the icon position determination method described in the first aspect.
  • In this solution, a target object in the target image is detected to obtain the reference position of a person or object, and the salient position in the target image, which is likely to receive more visual attention, is also detected, so as to obtain the more critical positions in the target image.
  • An icon position is then selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions, so that the icon does not cover the reference position or the salient position.
  • This method requires no manual intervention and is highly efficient.
  • Fig. 1 is a flow chart showing a method for determining the position of an icon according to an exemplary embodiment
  • Fig. 2 is a flow chart showing another method for determining the position of an icon according to an exemplary embodiment
  • Fig. 3 is a schematic diagram of determining the position of an icon in a target image according to an exemplary embodiment
  • Fig. 4 is a block diagram showing a device for determining the position of an icon according to an exemplary embodiment
  • Fig. 5 is a block diagram showing another device for determining the position of an icon according to an exemplary embodiment
  • Fig. 6 is a block diagram showing an electronic device (general structure of a mobile terminal) according to an exemplary embodiment
  • Fig. 7 is a block diagram showing an electronic device (general structure of a server) according to an exemplary embodiment.
  • Fig. 1 is a flow chart showing a method for determining the position of an icon according to an exemplary embodiment. As shown in Fig. 1, the method for determining the position of an icon includes the following steps.
  • In step S101, a target object in the target image is detected, and a reference position of the target object in the target image is determined.
  • The target image refers to an image to which an icon is to be added.
  • The target image may be a video frame, a static image, etc., and the icon to be added may be an opaque or semi-transparent logo such as caption text, a floating picture, or a watermark.
  • the target image generally includes a target object, and the target object can be any object included in the image, such as people, animals, objects, plants, and so on.
  • An object detection algorithm such as the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm or the Single Shot MultiBox Detector (SSD) algorithm can be used to detect the target image.
  • the embodiment of this application does not specifically limit the detection method.
  • According to the detection results, one or more rectangular boxes are output to frame the target object. The center position of the rectangular frame, or the position corresponding to a feature of the object within the frame, is selected and used as the reference position of the target object.
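As a rough sketch of this step (the `(x, y, width, height)` box format and the function name are illustrative assumptions, not the detector's actual output format):

```python
def reference_position(box):
    """Return the center of a detector bounding box given as (x, y, w, h).

    The box format is an assumption; real detectors such as Faster R-CNN
    or SSD may instead return (x1, y1, x2, y2) corner coordinates.
    """
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

# Example: a 100x200 box whose top-left corner is at (40, 60).
print(reference_position((40, 60, 100, 200)))  # (90.0, 160.0)
```

The same helper would also apply when a facial feature point, rather than the box center, is chosen as the reference position.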
  • In step S102, a salient position in the target image is detected.
  • saliency detection can be performed on the target image.
  • Saliency detection is a technology that calculates the saliency of the image and generates a saliency map by analyzing the characteristics of image color, intensity, and direction.
  • The saliency of an image refers to the ability of pixels (or regions) in the image to stand out from other pixels (or regions) and attract visual attention. For example, if there is a black spot on a piece of white paper, the saliency of the black spot is high while that of the rest of the paper is low.
  • the saliency map of the image is a two-dimensional image with the same size as the original image, where each pixel value represents the saliency of the corresponding point of the original image.
  • the saliency map can be used to guide the selection of the attention area and quickly locate the saliency area of the image.
  • The minimum barrier salient object detection (MBS) algorithm can be used to detect the saliency of the target image.
  • In step S103, an icon position is selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • the preset candidate position is a predetermined floating position of the icon on the target image, and there are generally multiple candidate positions.
  • the candidate positions may include the exact center, upper center, lower center, left center, center right, upper left corner, upper right corner, lower left corner, and lower right corner of the target image.
  • the candidate position with the largest Euclidean distance can be selected as the icon placement position.
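The distance-based selection in step S103 can be sketched as follows (a simplified illustration: the nine candidate positions are expressed as fractions of the image size, and the function names and the specific fractions are assumptions):

```python
import math

def candidate_positions(width, height):
    """Nine preset candidates: center, edge midpoints, and corners,
    expressed here as illustrative fractions of the image size."""
    fractions = [(0.5, 0.5), (0.5, 0.1), (0.5, 0.9), (0.1, 0.5), (0.9, 0.5),
                 (0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]
    return [(fx * width, fy * height) for fx, fy in fractions]

def pick_icon_position(key_position, candidates):
    """Select the candidate with the largest Euclidean distance
    to the reference or salient position."""
    return max(candidates, key=lambda c: math.dist(c, key_position))

cands = candidate_positions(1920, 1080)
# A key position near the upper-left corner pushes the icon
# toward the lower-right corner.
print(pick_icon_position((200, 150), cands))
```

In practice the candidate set and the exact offsets from the image edges would be tuned per product.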
  • By detecting the salient position in the target image, the more critical positions in the target image are obtained.
  • That is, the reference position of a person or object and the salient position that is likely to receive more attention are both determined, and an icon position is finally selected from the preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • Selecting the icon position in this way prevents the icon from covering the reference position or the salient position. The method requires no manual intervention and is highly efficient.
  • Fig. 2 is a flow chart showing another method for determining the position of an icon according to an exemplary embodiment, which is an optional embodiment of the method in Fig. 1. As shown in Fig. 2, the method for determining the position of an icon includes the following steps.
  • In step S201, at least one object included in the target image and the corresponding target area are identified.
  • the objects included in the target image are first identified. For example, face detection, cat and dog detection, object detection, and even more refined facial feature point detection are performed on the target image. According to the detection result, zero, one, or more rectangular boxes are generally output.
  • The rectangular box is the target area and is used to frame the target object.
  • Fig. 3 is a schematic diagram of determining the position of an icon in a target image according to an exemplary embodiment.
  • In Figure 3, there are a little boy 01, a little girl 02, and a seesaw 03.
  • Three objects in the target image are identified: the little boy 01, the little girl 02, and the seesaw 03, and three corresponding target areas are output: the target area S1 corresponding to the face of the little boy 01, the target area S2 corresponding to the face of the little girl 02, and the target area S3 corresponding to the pivot of the seesaw 03.
  • In step S202, a target object is selected from the at least one object according to a preset rule.
  • The most critical object is selected according to a preset rule: for example, the largest object among all objects, the closest object, or the smallest object.
  • step S202 includes the following step A1 or step A2:
  • Step A1 Determine the target object according to the position of the at least one object in the target image.
  • The location area of each object in the target image is determined, and the target object is determined according to the location area. For example, the location areas may be divided into the exact center, upper center, lower center, center left, center right, upper left corner, upper right corner, lower left corner, lower right corner, etc., and an object in a particular location area is then determined as the target object. For example, in Fig. 3, the boy 01 located in the upper center position can be determined as the target object.
  • Step A2 Determine the target object according to the area ratio of the at least one object in the target image.
  • the target object can be determined according to the area proportion of the target area corresponding to each object.
  • the object with the largest area proportion or the object with the smallest area proportion can be determined as the target object.
  • For example, in Fig. 3, the seesaw 03, whose target area has the largest area, can be used as the target object.
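Step A2 can be illustrated with a minimal sketch (the object names and bounding boxes below are hypothetical stand-ins for the detections in Fig. 3):

```python
def select_target_by_area(objects, image_width, image_height):
    """Pick the object whose target area occupies the largest share
    of the image, per step A2.

    `objects` maps an object name to its bounding box (x, y, w, h);
    both the names and the boxes are illustrative assumptions.
    """
    image_area = image_width * image_height

    def area_ratio(item):
        _, (x, y, w, h) = item
        return (w * h) / image_area

    name, _ = max(objects.items(), key=area_ratio)
    return name

objects = {
    "boy_face": (300, 100, 120, 120),      # stand-in for area S1
    "girl_face": (900, 140, 110, 110),     # stand-in for area S2
    "seesaw_pivot": (500, 600, 400, 200),  # stand-in for area S3 (largest)
}
print(select_target_by_area(objects, 1280, 720))  # seesaw_pivot
```

Swapping `max` for `min` would implement the alternative rule of choosing the object with the smallest area proportion.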
  • In step S203, a target position of the target area is used as the reference position according to the target object, where the target position includes the center position of the target area or the position corresponding to a target feature contained in the target area.
  • the target position can be selected as the reference position from the target area corresponding to the target object according to a preset rule.
  • the preset rule may be to select the center position of the target area as the reference position, or to select other specific positions of the target area as the reference position.
  • the preset rule may be to select the position corresponding to the target feature contained in the target area, for example, select the position corresponding to the human eye in the target area, or the position corresponding to the human nose.
  • the specific objects included in the target feature are not specifically limited in the embodiments of the present application, and those skilled in the art can preset according to requirements.
  • For example, in Fig. 3, the position S11 corresponding to the nose in target area S1 is selected as the reference position.
  • In this way, the target area where an object in the target image is located is identified, and the target position is selected as the center of the target area or as the position corresponding to the target feature.
  • Since the target position is the position of a key part of the target object, the subsequently selected icon position can avoid this target position.
  • In step S204, a grayscale image corresponding to the target image is acquired.
  • To acquire the salient position of the target image, the grayscale image corresponding to the target image is first obtained.
  • A grayscale image is an image with only one sampled color per pixel; such images are usually displayed in shades of gray from the darkest black to the brightest white. Grayscale images differ from black-and-white images in that they have many levels of color depth between black and white.
  • the grayscale image can be obtained by measuring the brightness of each pixel in the target image in a single electromagnetic wave spectrum, such as visible light.
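A minimal conversion sketch (assuming the common Rec. 601 luma weights; an actual implementation would typically call a library routine rather than loop over pixels):

```python
def to_grayscale(rgb_image):
    """Convert a nested list of (R, G, B) pixels to gray values 0-255
    using the Rec. 601 luma weights 0.299/0.587/0.114.

    The nested-list image representation is an illustrative assumption.
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

image = [[(255, 255, 255), (0, 0, 0)],   # white, black
         [(255, 0, 0), (0, 0, 255)]]     # red, blue
print(to_grayscale(image))  # [[255, 0], [76, 29]]
```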
  • The purpose of obtaining the grayscale image corresponding to the target image is to detect the saliency of the target image. However, it is generally believed that the target object in an image is more important than the salient area, so if the reference position has been detected, saliency detection need not be performed; that is, steps S204 to S206 in this embodiment of the application can be skipped.
  • In step S205, a salient area is determined according to the grayscale information of the different areas included in the grayscale image.
  • the grayscale image includes multiple grayscale areas, and each area has corresponding grayscale information.
  • The grayscale information indicates the color depth of each area. According to the grayscale information corresponding to each area, the salient area of the grayscale image can be determined.
  • The saliency of an image refers to the ability of pixels (or regions) to be distinguished from other pixels (or regions) and attract visual attention. Therefore, the area in the grayscale image whose gray value differs most from that of other areas can be determined as the salient area. For example, if, after converting Figure 3 into a grayscale image, the gray value of the area S21 where the right eye of the little girl 02 is located differs most from the gray values of the other areas, then the area S21 is determined as the salient area.
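The grayscale-difference heuristic described above can be sketched as a block-based illustration (this is not the MBS algorithm itself; the block partitioning, contrast measure, and function name are assumptions):

```python
def salient_block(gray, block=2):
    """Split a grayscale image (nested list) into block x block tiles and
    return the (row, col) tile whose mean gray value differs most from the
    mean of all other tiles, together with the tile center in pixels."""
    h, w = len(gray), len(gray[0])
    bh, bw = h // block, w // block
    means = {}
    for br in range(block):
        for bc in range(block):
            vals = [gray[y][x]
                    for y in range(br * bh, (br + 1) * bh)
                    for x in range(bc * bw, (bc + 1) * bw)]
            means[(br, bc)] = sum(vals) / len(vals)

    def contrast(tile):
        others = [m for t, m in means.items() if t != tile]
        return abs(means[tile] - sum(others) / len(others))

    br, bc = max(means, key=contrast)
    center = (bc * bw + bw // 2, br * bh + bh // 2)  # salient position (x, y)
    return (br, bc), center

# A mostly-bright image with one dark quadrant: that quadrant is salient,
# analogous to area S21 in the Fig. 3 example.
gray = [[200, 200, 10, 10],
        [200, 200, 10, 10],
        [200, 200, 200, 200],
        [200, 200, 200, 200]]
tile, pos = salient_block(gray)
print(tile, pos)  # (0, 1) (3, 1)
```

Returning the tile center directly also covers the center-point step (S206) that follows.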
  • In step S206, the center point of the salient area is determined to obtain the salient position.
  • That is, the point at the center of the salient area is selected and regarded as the salient position.
  • the center point S22 of the area S21 may be determined as a prominent position.
  • In this way, the subsequently selected icon position can avoid the salient position, preventing the icon from covering the area of the target image that attracts the user's attention.
  • In step S207, if the reference position exists but the salient position does not, the candidate position with the largest distance from the reference position is selected as the icon position.
  • the candidate positions may include the exact center, upper center, lower center, left center, center right, upper left corner, upper right corner, lower left corner, and lower right corner of the target image.
  • the above-mentioned distance may be Euclidean distance, that is, the distance between the center point of the reference position and the center point of the candidate position.
  • In step S208, if the salient position exists but the reference position does not, the candidate position with the largest distance from the salient position is selected as the icon position.
  • If no object can be detected in the target image, the target image has no reference position. If the grayscale image corresponding to the target image nevertheless has a salient position, the candidate position with the largest distance from the salient position is selected as the icon position. Specifically, the distance between the center point of the salient position and the center point of each candidate position is calculated.
  • In step S209, if both the reference position and the salient position exist, then among the candidate positions whose distances to both the reference position and the salient position are greater than a preset distance threshold, the candidate position with the largest average distance is selected as the icon position.
  • The average distance is the average of the distances from the candidate position to the reference position and to the salient position.
  • Specifically, the first distance between the reference position and each candidate position and the second distance between the salient position and each candidate position can be calculated separately. The candidate positions whose first and second distances are both greater than the preset distance threshold are retained, yielding at least one candidate position; for each of these, the average of the first distance and the second distance is calculated to obtain at least one average distance, and the candidate position with the largest average distance is used as the icon position.
  • When the reference position exists and the salient position also exists, one of them alone can instead be used to determine the icon position, depending on the actual application: for example, the candidate position with the largest distance from the reference position, or the candidate position with the largest distance from the salient position, is selected as the icon position.
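Steps S207 to S209 can be combined into a single selection routine, sketched below (the threshold value, the fallback when no candidate passes the threshold, and the function names are assumptions not specified by the embodiment):

```python
import math

def choose_icon_position(candidates, reference=None, salient=None, threshold=100.0):
    """Sketch of steps S207-S209: pick the candidate farthest from the
    reference position (S207), from the salient position (S208), or, when
    both exist (S209), the candidate with the largest average distance among
    those farther than `threshold` from both key positions."""
    if reference and not salient:
        return max(candidates, key=lambda c: math.dist(c, reference))
    if salient and not reference:
        return max(candidates, key=lambda c: math.dist(c, salient))
    if reference and salient:
        eligible = [c for c in candidates
                    if math.dist(c, reference) > threshold
                    and math.dist(c, salient) > threshold]
        pool = eligible or candidates  # assumed fallback if all are filtered out
        return max(pool, key=lambda c: (math.dist(c, reference)
                                        + math.dist(c, salient)) / 2.0)
    return None  # neither key position was detected

cands = [(0, 0), (100, 0), (0, 100), (100, 100)]
print(choose_icon_position(cands, reference=(10, 10)))  # (100, 100)
print(choose_icon_position(cands, reference=(10, 10),
                           salient=(90, 20), threshold=20.0))  # (0, 100)
```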
  • In this way, the candidate position with the largest distance from the reference position, or with the largest distance from the salient position, or with the largest average distance from the reference position and the salient position, is selected as the icon position, so that the floating position of the icon is far from the area where the key object in the target image is located, avoiding any influence of the icon on the display effect.
  • In addition, this embodiment of the application identifies the target area where an object in the target image is located and selects the target position as the center of the target area or the position corresponding to the target feature; this target position corresponds to a key part of the target object.
  • At the same time, the salient position is determined from the grayscale image corresponding to the target image; the salient position is the area of the target image that attracts the most visual attention. Finally, the candidate position with the largest distance from the reference position, or with the largest distance from the salient position, or with the largest average distance from both, is selected as the icon position, so that the floating position of the icon is far from the area where the key object is located and the icon does not affect the display effect.
  • This solution fully considers both the reference position and the salient position of the target image, determines the floating position of the icon without manual intervention, and selects a more accurate position.
  • Fig. 4 is a block diagram showing a device for determining the position of an icon according to an exemplary embodiment.
  • the icon position determining device 400 includes:
  • the reference position determining module 401 is configured to perform detection of a target object in a target image, and determine a reference position of the target object in the target image;
  • the salient position detection module 402 is configured to detect salient positions in the target image
  • the position selection module 403 is configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • Fig. 5 is a block diagram showing another device for determining the position of an icon according to an exemplary embodiment. As shown in Fig. 5, the icon position determining device 500 includes:
  • the reference position determining module 501 is configured to perform detection of a target object in a target image and determine a reference position of the target object in the target image;
  • the reference position determining module 501 includes:
  • the recognition sub-module 5011 is configured to perform the recognition of at least one object included in the target image and the corresponding target area;
  • the target object selection sub-module 5012 is configured to perform the selection of a target object from the at least one object according to a preset rule
  • the reference position determination submodule 5013 is configured to use a target position of the target area as the reference position according to the target object, the target position including: the center position of the target area, or the position corresponding to the target feature contained in the target area.
  • the target object selection submodule 5012 includes: a first determining unit configured to determine the target object according to the position of the at least one object in the target image; or a second determining unit configured to determine the target object according to the area proportion of the at least one object in the target image.
  • the salient position detection module 502 is configured to detect salient positions in the target image
  • the salient position detection module 502 includes:
  • the grayscale image acquisition sub-module 5021 is configured to acquire the grayscale image corresponding to the target image;
  • the salient region determination sub-module 5022 is configured to determine the salient region according to the grayscale information of the different regions included in the grayscale image;
  • the salient position determining sub-module 5023 is configured to determine the center point of the salient region to obtain the salient position.
  • the position selection module 503 is configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • the position selection module 503 includes:
  • the first position selection sub-module 5031 is configured to, if the reference position exists but the salient position does not exist, select the candidate position with the largest distance from the reference position to obtain the icon position; the second position selection sub-module 5032 is configured to, if the salient position exists but the reference position does not exist, select the candidate position with the largest distance from the salient position to obtain the icon position; the third position selection sub-module 5033 is configured to, if both the reference position and the salient position exist, select the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
  • Fig. 6 is a block diagram showing an electronic device 600 for determining the position of an icon according to an exemplary embodiment.
  • the electronic device 600 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, and a sensor component 614 , And communication component 616.
  • the processing component 602 generally controls the overall operations of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 602 may include one or more processors 620 to execute instructions to implement the following process:
  • a target object in a target image is detected, and a reference position of the target object in the target image is determined;
  • a salient position in the target image is detected;
  • an icon position is selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  • optionally, the processor 620 is specifically configured to execute:
  • a grayscale image corresponding to the target image is acquired;
  • a salient region is determined according to the grayscale information of different regions included in the grayscale image;
  • the center point of the salient region is determined to obtain the salient position.
  • optionally, the processor 620 is specifically configured to execute:
  • at least one object included in the target image and the corresponding target region are recognized;
  • a target object is selected from the at least one object according to a preset rule;
  • the target position of the target region is taken as the reference position according to the target object, the target position including: the center position of the target region, or the position corresponding to a target feature contained in the target region.
  • optionally, the processor 620 is specifically configured to execute:
  • the target object is determined according to the position of the at least one object in the target image;
  • or, the target object is determined according to the area proportion of the at least one object in the target image.
  • optionally, the processor 620 is specifically configured to execute:
  • if the reference position exists but the salient position does not exist, the candidate position with the largest distance from the reference position is selected to obtain the icon position;
  • if the salient position exists but the reference position does not exist, the candidate position with the largest distance from the salient position is selected to obtain the icon position;
  • if both the reference position and the salient position exist, the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest is selected to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
  • processing component 602 may include one or more modules to facilitate the interaction between the processing component 602 and other components.
  • processing component 602 may include a multimedia module to facilitate the interaction between the multimedia component 608 and the processing component 602.
  • the memory 604 is configured to store various types of data to support operations on the device 600. Examples of such data include instructions for any application or method operated on the electronic device 600, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 606 provides power for various components of the electronic device 600.
  • the power supply component 606 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 600.
  • the multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 610 is configured to perform output and/or input of audio signals.
  • the audio component 610 includes a microphone (MIC), and when the electronic device 600 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal can be further stored in the memory 604 or sent via the communication component 616.
  • the audio component 610 further includes a speaker for outputting audio signals.
  • the I/O interface 612 provides an interface between the processing component 602 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 614 includes one or more sensors for providing state assessments of various aspects of the electronic device 600.
  • the sensor component 614 can detect the on/off status of the device 600 and the relative positioning of components, for example, the display and keypad of the electronic device 600.
  • the sensor component 614 can also detect a change in position of the electronic device 600 or one of its components, the presence or absence of contact between the user and the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and temperature changes of the electronic device 600.
  • the sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 616 is configured to perform wired or wireless communication between the electronic device 600 and other devices.
  • the electronic device 600 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 6G), or a combination thereof.
  • the communication component 616 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 600 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • in an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 604 including instructions executable by the processor 620 of the electronic device 600 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • Fig. 7 is a block diagram showing an electronic device 700 for determining the position of an icon according to an exemplary embodiment.
  • the electronic device 700 may be provided as a server.
  • the electronic device 700 includes a processing component 722, which further includes one or more processors, and a memory resource represented by a memory 732, for storing instructions that can be executed by the processing component 722, such as an application program.
  • the application program stored in the memory 732 may include one or more modules each corresponding to a set of instructions.
  • the processing component 722 is configured to execute instructions to perform the above-described icon position determination method.
  • the electronic device 700 may also include a power component 727 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 757.
  • the electronic device 700 can operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a computer program product includes readable program code, which can be executed by the processing component 722 of the electronic device 700 to complete the foregoing method.
  • the program code may be stored in a storage medium of the electronic device 700, and the storage medium may be a non-transitory computer-readable storage medium.
  • the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to a method and apparatus for determining an icon position. The method includes: detecting a target object in a target image and determining a reference position of the target object in the target image; and detecting a salient position in the target image, thereby obtaining the reference position of a relatively key object in the target image as well as the salient position that is likely to attract more attention; and finally selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions, so that the icon is prevented from occluding the reference position and the salient position. The method requires no manual involvement at all and is highly efficient.

Description

Method and apparatus for determining an icon position
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201910430924.8, entitled "Method and apparatus for determining an icon position" and filed with the China Patent Office on May 22, 2019, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the technical field of image processing, and in particular to a method and apparatus for determining an icon position.
BACKGROUND
At present, icons such as the title text or watermark of a picture or video are generally placed in the middle, upper, or lower part of the picture or video frame. The inventors found that in many cases such icons may occlude key objects in the frame, impair the presentation of the frame, and disturb users viewing it.
The existing solution to this problem relies on the user manually adjusting the icon placement, which the inventors consider time-consuming for the user and inefficient.
SUMMARY
To overcome the problems in the related art, the present application provides a method and apparatus for determining an icon position.
According to a first aspect of the embodiments of the present application, a method for determining an icon position is provided, including: detecting a target object in a target image, and determining a reference position of the target object in the target image; detecting a salient position in the target image; and selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
According to a second aspect of the embodiments of the present application, an apparatus for determining an icon position is provided, including: a reference position determining module configured to detect a target object in a target image and determine a reference position of the target object in the target image; a salient position detection module configured to detect a salient position in the target image; and a position selection module configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method for determining an icon position according to the first aspect.
According to a fourth aspect of the embodiments of the present application, a readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining an icon position according to the first aspect.
According to a fifth aspect of the embodiments of the present application, a computer program product is provided; when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining an icon position according to the first aspect.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:
In the embodiments of the present application, a target object in a target image is detected, a reference position of the target object in the target image is determined, and a salient position in the target image is detected, so that the reference position of a relatively key object in the target image and the salient position that is likely to attract more attention can be obtained; finally, an icon position is selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions, so that the icon is prevented from occluding the reference position and the salient position. The method requires no manual involvement at all and is highly efficient.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the specification, serve to explain the principles of the present application; they do not constitute an improper limitation of the present application.
Fig. 1 is a flowchart of a method for determining an icon position according to an exemplary embodiment;
Fig. 2 is a flowchart of another method for determining an icon position according to an exemplary embodiment;
Fig. 3 is a schematic diagram of determining an icon position in a target image according to an exemplary embodiment;
Fig. 4 is a block diagram of an apparatus for determining an icon position according to an exemplary embodiment;
Fig. 5 is a block diagram of another apparatus for determining an icon position according to an exemplary embodiment;
Fig. 6 is a block diagram of an electronic device (general structure of a mobile terminal) according to an exemplary embodiment;
Fig. 7 is a block diagram of an electronic device (general structure of a server) according to an exemplary embodiment.
DETAILED DESCRIPTION
To enable those of ordinary skill in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification and claims of the present application and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
Fig. 1 is a flowchart of a method for determining an icon position according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps.
In step S101, a target object in a target image is detected, and a reference position of the target object in the target image is determined.
In the embodiments of the present application, the target image is an image to which an icon is to be added; it may be a video frame, a static image, or the like, and the icon to be added to the target image may be an opaque or semi-transparent mark such as title text, a floating picture, or a watermark. The target image generally contains a target object, which may be any object in the image, such as a person, an animal, an article, or a plant. Specifically, a detection algorithm such as Faster Regions with Convolutional Neural Networks features (Faster-RCNN) or Single Shot MultiBox Detector (SSD) may be used to detect the target object in the target image; the embodiments of the present application do not specifically limit the detection method. After detection, zero, one, or more rectangular boxes framing the target object are generally output according to the detection result. The center position of such a rectangular box, or the position corresponding to a certain feature of the object in the box, is selected as the reference position of the target object.
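As an illustration of this step, a minimal sketch follows (the helper name and box format are assumptions for illustration; in practice the bounding boxes would come from a detector such as Faster-RCNN or SSD). It takes the largest detected box and uses its center as the reference position:

```python
def reference_position(boxes):
    """Return the center of the largest detected bounding box as the
    reference position, or None when the detector found nothing.

    `boxes` is a list of (x, y, w, h) rectangles; here they are plain
    tuples, standing in for real detector output.
    """
    if not boxes:
        return None  # zero boxes detected -> no reference position
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # largest area
    return (x + w / 2.0, y + h / 2.0)                   # box center
```

Choosing the largest box is one of the preset rules described later (step A2); any other rule could be substituted in the `key` function.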
In step S102, a salient position in the target image is detected.
In the embodiments of the present application, saliency detection may be performed on the target image. Saliency detection is a technique that analyzes features of an image such as color, intensity, and orientation to compute the image's saliency and generate a saliency map. A salient point of an image is a pixel (or region) whose ability to attract visual attention distinguishes it from other pixels (or regions). For example, if there is a black dot on a sheet of white paper, the saliency of the black dot is high and that of the rest is low. The saliency map of an image is a two-dimensional image of the same size as the original image, in which each pixel value represents the saliency of the corresponding point in the original image. The saliency map can be used to guide the selection of attention regions and quickly locate the salient regions of the image. The Minimum Barrier Salient (MBS) detection algorithm may be used to perform saliency detection on the target image.
In step S103, an icon position is selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
In the embodiments of the present application, the preset candidate positions are predetermined floating positions for the icon on the target image, and there are generally multiple candidate positions. For example, the candidate positions may include the exact center, upper center, lower center, left of center, right of center, upper left corner, upper right corner, lower left corner, and lower right corner of the target image. The Euclidean distance between each reference position obtained in step S101 and each candidate position is computed, as is the Euclidean distance between the salient position obtained in step S102 and each candidate position; according to these distances, one or more candidate positions are selected as icon positions.
Specifically, to prevent the icon from occluding key objects in the frame, the candidate position with the largest Euclidean distance may be selected as the icon placement position.
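The distance-based selection above can be sketched as follows (the nine fractional anchor points are one plausible layout for the candidate set; the exact coordinates are an assumption, not fixed by the method):

```python
import math

def candidate_positions(width, height):
    """Nine candidate anchors as fractions of the frame size: center,
    four off-center points, and four near-corner points (the fractions
    chosen here are illustrative)."""
    fracs = [(0.5, 0.5), (0.5, 0.25), (0.5, 0.75), (0.25, 0.5),
             (0.75, 0.5), (0.125, 0.125), (0.875, 0.125),
             (0.125, 0.875), (0.875, 0.875)]
    return [(fx * width, fy * height) for fx, fy in fracs]

def farthest_candidate(avoid, candidates):
    """Pick the candidate with the largest Euclidean distance from the
    position to avoid (the reference position or the salient position)."""
    return max(candidates, key=lambda c: math.dist(c, avoid))
```

For example, a reference position near the upper-left corner pushes the icon toward the lower-right anchor.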
In the embodiments of the present application, a target object in a target image is detected, a reference position of the target object in the target image is determined, and a salient position in the target image is detected, so that the reference position of a relatively key object in the target image and the salient position that is likely to attract more attention can be obtained; finally, an icon position is selected from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions, so that the icon is prevented from occluding the reference position and the salient position. The method requires no manual involvement at all and is highly efficient.
Fig. 2 is a flowchart of another method for determining an icon position according to an exemplary embodiment, which is an optional embodiment of the method in Fig. 1. As shown in Fig. 2, the method includes the following steps.
In step S201, at least one object included in the target image and the corresponding target region are recognized.
In the embodiments of the present application, the objects included in the target image are first recognized, for example through face detection, cat and dog detection, object detection, or even finer-grained facial landmark detection. According to the detection result, zero, one, or more rectangular boxes are generally output; each rectangular box is a target region that frames a target object.
Fig. 3 is a schematic diagram of determining an icon position in a target image according to an exemplary embodiment. Fig. 3 includes a little boy 01, a little girl 02, and a seesaw 03. Through image detection, three objects are recognized in the target image: the little boy 01, the little girl 02, and the seesaw 03, and three corresponding target regions are output: the target region S1 corresponding to the face of the little boy 01, the target region S2 corresponding to the face of the little girl 02, and the target region S3 corresponding to the fulcrum of the seesaw 03.
In step S202, a target object is selected from the at least one object according to a preset rule.
In the embodiments of the present application, the most key object is selected, according to a preset rule, from the at least one object recognized in the target image. For example, the largest of all the objects may be selected, or one of the several closest objects, or the smallest object, and so on.
Optionally, step S202 includes the following step A1 or step A2:
Step A1: determining the target object according to the position of the at least one object in the target image.
Specifically, the position region of each object in the target image is determined, and the target object is determined according to the position region. For example, the position regions may be divided into the exact center, upper center, lower center, left of center, right of center, upper left corner, upper right corner, lower left corner, lower right corner, and so on, and an object located in a certain position region is determined as the target object. For example, in Fig. 3, the little boy 01 located above the center may be determined as the target object.
Step A2: determining the target object according to the area proportion of the at least one object in the target image.
Specifically, the target object may be determined according to the area proportion of the target region corresponding to each object; for example, the object with the largest area proportion, or the object with the smallest area proportion, may be determined as the target object. For example, in Fig. 3, the seesaw 03, whose target region has the largest area proportion, may be taken as the target object.
In step S203, the target position of the target region is taken as the reference position according to the target object, the target position including the center position of the target region or the position corresponding to a target feature contained in the target region.
In the embodiments of the present application, after the target object is determined, a target position may be selected from the target region corresponding to the target object as the reference position according to a preset rule. The preset rule may be to select the center position of the target region as the reference position, or to select another specific position of the target region as the reference position. Alternatively, the preset rule may be to select the position corresponding to a target feature contained in the target region, for example the position corresponding to a person's eyes, or the position corresponding to a person's nose, in the target region. The specific objects included in the target feature are not specifically limited in the embodiments of the present application and can be preset by those skilled in the art as required.
For example, in Fig. 3, after the little boy 01 is determined as the target object, for the target region S1 corresponding to the little boy 01, the position S11 corresponding to the person's nose in S1 is selected as the reference position.
In the embodiments of the present application, the target region where an object in the target image is located is recognized, and the target position is selected at the center position of the target region or at the position corresponding to the target feature; this target position is where the key part of the target object is located, and the subsequent icon position selection can avoid it.
In step S204, a grayscale image corresponding to the target image is acquired.
In the embodiments of the present application, if the reference position of the target image does not exist, the salient position of the target image may be acquired. To acquire the salient position, the grayscale image corresponding to the target image may first be acquired. A grayscale image is an image in which each pixel has only one sampled color. Such images are usually displayed as gray levels ranging from the darkest black to the brightest white; unlike black-and-white images, a grayscale image has many levels of color depth between black and white. The grayscale image can be obtained by measuring the brightness of each pixel of the target image in a single electromagnetic spectrum, such as visible light.
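The grayscale conversion can be sketched as follows (the Rec. 601 luma weights used here are one common choice; the embodiment only requires some single-channel conversion):

```python
def to_gray(rgb_rows):
    """Convert an image given as rows of (r, g, b) tuples (0-255) into
    a single-channel grayscale image using Rec. 601 luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_rows]
```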
The purpose of acquiring the grayscale image corresponding to the target image is to perform saliency detection on the target image. However, the target object in an image is generally considered more important than the salient region of the image, so if a reference position has been detected, saliency detection is unnecessary; that is, steps S204 to S206 of the embodiments of the present application need not be performed.
In step S205, a salient region is determined according to the grayscale information of different regions included in the grayscale image.
The grayscale image includes multiple gray regions, each with corresponding grayscale information that represents the color depth of the region; the salient region in the grayscale image can be determined according to the grayscale information corresponding to each region. A salient point of an image is a pixel (or region) whose ability to attract visual attention distinguishes it from other pixels (or regions). Therefore, the region of the grayscale image whose gray value differs most from those of the other regions may be determined as the salient region. For example, after Fig. 3 is converted into a grayscale image, if the gray value of the region S21 where the right eye of the little girl 02 is located differs most from the gray values of the other regions of the grayscale image, the region S21 is determined as the salient region.
In step S206, the center point of the salient region is determined to obtain the salient position.
After the salient region is determined, the point at its center position is selected and taken as the salient position.
For example, in Fig. 3, the center point S22 of the region S21 may be determined as the salient position.
Because the salient position is the region of the image most capable of attracting visual attention, the subsequent icon position selection can avoid it, so as to prevent the icon from affecting the region of the target image that draws the user's gaze.
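Steps S204 to S206 can be sketched crudely as follows (the block-contrast scoring here is a simplified stand-in for a real saliency detector such as the MBS algorithm mentioned above; the block size and scoring rule are assumptions for illustration):

```python
def salient_center(gray, block=8):
    """Split a grayscale image (list of rows, values 0-255) into blocks,
    score each block by how far its mean gray level deviates from the
    global mean, and return the center of the highest-scoring block as
    the salient position."""
    h, w = len(gray), len(gray[0])
    global_mean = sum(map(sum, gray)) / (h * w)
    best, best_score = None, -1.0
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [gray[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            score = abs(sum(cells) / len(cells) - global_mean)
            if score > best_score:
                best_score = score
                best = (bx + min(block, w - bx) / 2.0,   # block center x
                        by + min(block, h - by) / 2.0)   # block center y
    return best
```

On an almost uniform image with one bright spot, the block containing the spot deviates most from the global mean, so its center is returned as the salient position.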
In step S207, if the reference position exists but the salient position does not exist, the candidate position with the largest distance from the reference position is selected to obtain the icon position.
In the embodiments of the present application, if the target object in the target image has been detected and its reference position has been determined, but the salient position of the target image does not exist because the grayscale information of the target image varies little, the candidate position farthest from the reference position is selected as the floating position of the icon. The candidate positions may include the exact center, upper center, lower center, left of center, right of center, upper left corner, upper right corner, lower left corner, and lower right corner of the target image. The above distance may be the Euclidean distance, that is, the distance between the center point of the reference position and the center point of the candidate position.
For example, in Fig. 3, if the salient position does not exist and the position S11 corresponding to the nose of the little boy 01 is the reference position, the Euclidean distance between the center point S12 of the reference position and the center point of each candidate position is computed; if the Euclidean distance between S12 and the lower right corner 04 is the largest, the candidate position at the lower right corner 04 is selected as the icon position.
In step S208, if the salient position exists but the reference position does not exist, the candidate position with the largest distance from the salient position is selected to obtain the icon position.
In the embodiments of the present application, if no object can be detected in the target image, the target image has no reference position; if the grayscale image corresponding to the target image has a salient position, the candidate position with the largest distance from the salient position is selected as the icon position. Specifically, the distance between the center point of the salient position and the center point of each candidate position is computed.
For example, in Fig. 3, if the reference position does not exist and the center point S22 of the region S21 corresponding to the eye of the little girl 02 is the salient position, the Euclidean distance between the point S22 and the center point of each candidate position is computed; if the Euclidean distance between S22 and the lower left corner 05 is the largest, the candidate position at the lower left corner 05 is selected as the icon position.
In step S209, if the reference position exists and the salient position exists, the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest is selected to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and to the salient position.
In the embodiments of the present application, if both the reference position and the salient position exist, a first distance between the reference position and each candidate position and a second distance between the salient position and each candidate position may be computed; the candidate positions whose first and second distances are both greater than the preset distance threshold are selected, yielding at least one candidate position; for each of these candidate positions, the average of the first and second distances is computed, yielding at least one average distance, and the candidate position with the largest average distance is taken as the icon position.
In addition, if both the reference position and the salient position exist, one of the two may also be chosen to determine the icon position according to the actual application: for example, the candidate position with the largest distance from the reference position is selected as the icon position, or the candidate position with the largest distance from the salient position is selected as the icon position.
In steps S207 to S209 above, the candidate position with the largest distance from the reference position, or the largest distance from the salient position, or the largest average distance from the reference position and the salient position, is selected as the icon position, so that the floating position of the icon is kept away from the region of the target image where key objects are located, preventing the icon from affecting the presentation of the frame.
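The three branches of steps S207 to S209 can be put together in one sketch (the threshold value and candidate layout are application choices, not fixed by the method):

```python
import math

def pick_icon_position(reference, salient, candidates, threshold):
    """Select an icon position per steps S207-S209: avoid whichever key
    position exists; when both exist, keep only candidates farther than
    `threshold` from each, then take the largest average distance."""
    if reference is not None and salient is None:
        return max(candidates, key=lambda c: math.dist(c, reference))
    if salient is not None and reference is None:
        return max(candidates, key=lambda c: math.dist(c, salient))
    if reference is not None and salient is not None:
        ok = [c for c in candidates
              if math.dist(c, reference) > threshold
              and math.dist(c, salient) > threshold]
        if ok:
            return max(ok, key=lambda c: (math.dist(c, reference)
                                          + math.dist(c, salient)) / 2.0)
    return None  # neither key position exists, or no candidate passes
```

With a reference position near one corner and a salient position near another, the surviving candidates are those far from both, and the one with the largest average distance wins.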
In addition to the beneficial effects of the method for determining an icon position in Fig. 1, the embodiments of the present application also recognize the target region where an object in the target image is located and select the target position at the center position of the target region or at the position corresponding to the target feature, the target position being where the key part of the target object is located; meanwhile, the salient position, i.e. the region of the target image most capable of attracting visual attention, is determined according to the grayscale image corresponding to the target image; finally, the candidate position with the largest distance from the reference position, or the largest distance from the salient position, or the largest average distance from the reference position and the salient position, is selected as the icon position, so that the floating position of the icon is kept away from the region where key objects in the target image are located, preventing the icon from affecting the presentation of the frame. This solution determines the floating position of the icon while fully considering the reference position and the salient position of the target image; it requires no manual involvement, and the selected position is more accurate.
Fig. 4 is a block diagram of an apparatus for determining an icon position according to an exemplary embodiment. Referring to Fig. 4, the icon position determining apparatus 400 includes:
a reference position determining module 401, configured to detect a target object in a target image and determine a reference position of the target object in the target image;
a salient position detection module 402, configured to detect a salient position in the target image;
a position selection module 403, configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
With regard to the icon determining apparatus 400 in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 5 is a block diagram of another apparatus for determining an icon position according to an exemplary embodiment. Referring to Fig. 5, the icon position determining apparatus 500 includes:
a reference position determining module 501, configured to detect a target object in a target image and determine a reference position of the target object in the target image;
wherein the reference position determining module 501 includes:
a recognition sub-module 5011, configured to recognize at least one object included in the target image and the corresponding target region; a target object selection sub-module 5012, configured to select a target object from the at least one object according to a preset rule; and a reference position determination sub-module 5013, configured to take the target position of the target region as the reference position according to the target object, the target position including the center position of the target region or the position corresponding to a target feature contained in the target region.
The target object selection sub-module 5012 includes: a first determining unit configured to determine the target object according to the position of the at least one object in the target image; or, a second determining unit configured to determine the target object according to the area proportion of the at least one object in the target image.
a salient position detection module 502, configured to detect a salient position in the target image;
wherein the salient position detection module 502 includes:
a grayscale image acquisition sub-module 5021, configured to acquire the grayscale image corresponding to the target image; a salient region determination sub-module 5022, configured to determine the salient region according to the grayscale information of different regions included in the grayscale image; and a salient position determination sub-module 5023, configured to determine the center point of the salient region to obtain the salient position.
a position selection module 503, configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
wherein the position selection module 503 includes:
a first position selection sub-module 5031, configured to, if the reference position exists but the salient position does not exist, select the candidate position with the largest distance from the reference position to obtain the icon position; a second position selection sub-module 5032, configured to, if the salient position exists but the reference position does not exist, select the candidate position with the largest distance from the salient position to obtain the icon position; and a third position selection sub-module 5033, configured to, if both the reference position and the salient position exist, select the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
With regard to the icon determining apparatus 500 in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Fig. 6 is a block diagram of an electronic device 600 for determining an icon position according to an exemplary embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 6, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operations of the electronic device 600, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to implement the following process:
detecting a target object in a target image, and determining a reference position of the target object in the target image;
detecting a salient position in the target image;
selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
Optionally, the processor 620 is specifically configured to execute:
acquiring a grayscale image corresponding to the target image;
determining a salient region according to the grayscale information of different regions included in the grayscale image;
determining the center point of the salient region to obtain the salient position.
Optionally, the processor 620 is specifically configured to execute:
recognizing at least one object included in the target image and the corresponding target region;
selecting a target object from the at least one object according to a preset rule;
taking the target position of the target region as the reference position according to the target object, the target position including the center position of the target region or the position corresponding to a target feature contained in the target region.
Optionally, the processor 620 is specifically configured to execute:
determining the target object according to the position of the at least one object in the target image;
or, determining the target object according to the area proportion of the at least one object in the target image.
Optionally, the processor 620 is specifically configured to execute:
if the reference position exists but the salient position does not exist, selecting the candidate position with the largest distance from the reference position to obtain the icon position;
if the salient position exists but the reference position does not exist, selecting the candidate position with the largest distance from the salient position to obtain the icon position;
if both the reference position and the salient position exist, selecting the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations on the device 600. Examples of such data include instructions for any application or method operated on the electronic device 600, contact data, phone book data, messages, pictures, videos, and the like. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 606 provides power for the various components of the electronic device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC); when the electronic device 600 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing state assessments of various aspects of the electronic device 600. For example, the sensor component 614 can detect the on/off state of the device 600 and the relative positioning of components, for example, the display and keypad of the electronic device 600; the sensor component 614 can also detect a change in position of the electronic device 600 or one of its components, the presence or absence of contact between the user and the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and temperature changes of the electronic device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 6G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 604 including instructions executable by the processor 620 of the electronic device 600 to complete the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 7 is a block diagram of an electronic device 700 for determining an icon position according to an exemplary embodiment. For example, the electronic device 700 may be provided as a server. Referring to Fig. 7, the electronic device 700 includes a processing component 722, which further includes one or more processors, and memory resources represented by a memory 732 for storing instructions executable by the processing component 722, such as an application program. The application program stored in the memory 732 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 722 is configured to execute the instructions to perform the above-described method for determining an icon position.
The electronic device 700 may also include a power component 727 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 757. The electronic device 700 can operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a computer program product is also provided; the computer program product includes readable program code executable by the processing component 722 of the electronic device 700 to complete the above methods. Optionally, the program code may be stored in a storage medium of the electronic device 700, which may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will easily conceive of other embodiments of the present application after considering the specification and practicing the invention disclosed here. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed by the present application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structure described above and shown in the accompanying drawings, and various modifications and changes can be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (12)

  1. A method for determining an icon position, comprising:
    detecting a target object in a target image, and determining a reference position of the target object in the target image;
    detecting a salient position in the target image;
    selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  2. The method according to claim 1, wherein detecting the salient position in the target image comprises:
    acquiring a grayscale image corresponding to the target image;
    determining a salient region according to grayscale information of different regions included in the grayscale image;
    determining the center point of the salient region to obtain the salient position.
  3. The method according to claim 1, wherein detecting the target object in the target image and determining the reference position of the target object in the target image comprises:
    recognizing at least one object included in the target image and the corresponding target region;
    selecting a target object from the at least one object according to a preset rule;
    taking a target position of the target region as the reference position according to the target object, the target position comprising: the center position of the target region, or a position corresponding to a target feature contained in the target region.
  4. The method according to claim 3, wherein selecting the target object from the at least one object according to the preset rule comprises:
    determining the target object according to the position of the at least one object in the target image;
    or, determining the target object according to the area proportion of the at least one object in the target image.
  5. The method according to any one of claims 1 to 4, wherein selecting the icon position from the candidate positions according to the distance between the reference position or the salient position and the preset candidate positions comprises:
    if the reference position exists but the salient position does not exist, selecting the candidate position with the largest distance from the reference position to obtain the icon position;
    if the salient position exists but the reference position does not exist, selecting the candidate position with the largest distance from the salient position to obtain the icon position;
    if both the reference position and the salient position exist, selecting the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
  6. An apparatus for determining an icon position, comprising:
    a reference position determining module, configured to detect a target object in a target image and determine a reference position of the target object in the target image;
    a salient position detection module, configured to detect a salient position in the target image;
    a position selection module, configured to select an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  7. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute:
    detecting a target object in a target image, and determining a reference position of the target object in the target image;
    detecting a salient position in the target image;
    selecting an icon position from preset candidate positions according to the distance between the reference position or the salient position and the candidate positions.
  8. The electronic device according to claim 7, wherein the processor is specifically configured to execute:
    acquiring a grayscale image corresponding to the target image;
    determining a salient region according to grayscale information of different regions included in the grayscale image;
    determining the center point of the salient region to obtain the salient position.
  9. The electronic device according to claim 7, wherein the processor is specifically configured to execute:
    recognizing at least one object included in the target image and the corresponding target region;
    selecting a target object from the at least one object according to a preset rule;
    taking a target position of the target region as the reference position according to the target object, the target position comprising: the center position of the target region, or a position corresponding to a target feature contained in the target region.
  10. The electronic device according to claim 9, wherein the processor is specifically configured to execute:
    determining the target object according to the position of the at least one object in the target image;
    or, determining the target object according to the area proportion of the at least one object in the target image.
  11. The electronic device according to any one of claims 7 to 10, wherein the processor is specifically configured to execute:
    if the reference position exists but the salient position does not exist, selecting the candidate position with the largest distance from the reference position to obtain the icon position;
    if the salient position exists but the reference position does not exist, selecting the candidate position with the largest distance from the salient position to obtain the icon position;
    if both the reference position and the salient position exist, selecting the candidate position whose distances from the reference position and the salient position are both greater than a preset distance threshold and whose average distance is the largest to obtain the icon position, the average distance being the average of the distances from the candidate position to the reference position and the salient position.
  12. A readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining an icon position according to any one of claims 1 to 5.
PCT/CN2020/078679 2019-05-22 2020-03-10 Method and apparatus for determining an icon position WO2020233201A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20809629.7A EP3974953A4 (en) 2019-05-22 2020-03-10 ICON POSITION DETERMINATION METHOD AND DEVICE
US17/532,349 US11574415B2 (en) 2019-05-22 2021-11-22 Method and apparatus for determining an icon position

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910430924.8 2019-05-22
CN201910430924.8A CN110286813B (zh) 2019-05-22 Method and apparatus for determining an icon position

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/532,349 Continuation US11574415B2 (en) 2019-05-22 2021-11-22 Method and apparatus for determining an icon position

Publications (1)

Publication Number Publication Date
WO2020233201A1 true WO2020233201A1 (zh) 2020-11-26

Family

ID=68002720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078679 WO2020233201A1 (zh) 2019-05-22 2020-03-10 图标位置确定方法和装置

Country Status (4)

Country Link
US (1) US11574415B2 (zh)
EP (1) EP3974953A4 (zh)
CN (1) CN110286813B (zh)
WO (1) WO2020233201A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110286813B (zh) 2019-05-22 2020-12-01 Beijing Dajia Internet Information Technology Co., Ltd. Method and apparatus for determining an icon position
US11941341B2 (en) * 2022-02-28 2024-03-26 Apple Inc. Intelligent inset window placement in content
CN114283910B (zh) * 2022-03-04 2022-06-24 Guangzhou Keli Medical Research Co., Ltd. Clinical data collection and analysis system based on multi-channel information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108008878A (zh) * 2017-12-04 2018-05-08 Beijing Kirin Hesheng Network Technology Co., Ltd. Application icon setting method and apparatus, and mobile terminal
CN109445653A (zh) * 2018-09-28 2019-03-08 Vivo Mobile Communication Co., Ltd. Icon processing method and mobile terminal
CN109471571A (zh) * 2018-10-23 2019-03-15 Nubia Technology Co., Ltd. Display method for a floating control, mobile terminal, and computer-readable storage medium
CN110286813A (zh) * 2019-05-22 2019-09-27 Beijing Dajia Internet Information Technology Co., Ltd. Method and apparatus for determining an icon position

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7107285B2 (en) * 2002-03-16 2006-09-12 Questerra Corporation Method, system, and program for an improved enterprise spatial system
CN1256705C (zh) * 2003-07-28 2006-05-17 Xidian University Wavelet-domain digital watermarking method based on image target regions
US8988609B2 (en) * 2007-03-22 2015-03-24 Sony Computer Entertainment America Llc Scheme for determining the locations and timing of advertisements and other insertions in media
CA2651464C (en) * 2008-04-30 2017-10-24 Crim (Centre De Recherche Informatique De Montreal) Method and apparatus for caption production
US8359541B1 (en) * 2009-09-18 2013-01-22 Sprint Communications Company L.P. Distributing icons so that they do not overlap certain screen areas of a mobile device
JP5465620B2 (ja) * 2010-06-25 2014-04-09 KDDI Corporation Video output device, program, and method for determining the region of additional information superimposed on video content
CN102830890B (zh) * 2011-06-13 2015-09-09 Alibaba Group Holding Limited Method and apparatus for displaying icons
US8943426B2 (en) * 2011-11-03 2015-01-27 Htc Corporation Method for displaying background wallpaper and one or more user interface elements on display unit of electrical apparatus at the same time, computer program product for the method and electrical apparatus implementing the method
CN103730125B (zh) 2012-10-12 2016-12-21 华为技术有限公司 一种回声抵消方法和设备
US20140105450A1 (en) * 2012-10-17 2014-04-17 Robert Berkeley System and method for targeting and reading coded content
CN102890814B (zh) * 2012-11-06 2014-12-10 Institute of Automation, Chinese Academy of Sciences Watermark embedding and extraction method
US9467750B2 (en) * 2013-05-31 2016-10-11 Adobe Systems Incorporated Placing unobtrusive overlays in video content
WO2015100594A1 (zh) * 2013-12-31 2015-07-09 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Display method and terminal
KR102266882B1 (ko) * 2014-08-07 2021-06-18 Samsung Electronics Co., Ltd. Screen display method for an electronic device
KR102269598B1 (ko) * 2014-12-08 2021-06-25 Samsung Electronics Co., Ltd. Method and apparatus for arranging objects according to the content of a background image
CN107885565B (zh) * 2017-10-31 2019-02-19 Ping An Technology (Shenzhen) Co., Ltd. Watermark embedding method, apparatus, device, and storage medium for a financial APP interface
CN108391078B (zh) * 2018-02-26 2020-10-27 Suzhou Keda Technology Co., Ltd. Method, system, device, and storage medium for determining the watermark embedding position in a video
CN108596820B (zh) * 2018-04-11 2022-04-05 Chongqing University of Education Image processing system based on information security
CN108550101B (zh) * 2018-04-19 2023-07-25 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, apparatus, and storage medium
CN110620946B (zh) * 2018-06-20 2022-03-18 Alibaba (China) Co., Ltd. Subtitle display method and apparatus
CN110456960B (zh) * 2019-05-09 2021-10-01 Huawei Technologies Co., Ltd. Image processing method, apparatus, and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108008878A (zh) * 2017-12-04 2018-05-08 Beijing Kirin Hesheng Network Technology Co., Ltd. Application icon setting method and apparatus, and mobile terminal
CN109445653A (zh) * 2018-09-28 2019-03-08 Vivo Mobile Communication Co., Ltd. Icon processing method and mobile terminal
CN109471571A (zh) * 2018-10-23 2019-03-15 Nubia Technology Co., Ltd. Display method for a floating control, mobile terminal, and computer-readable storage medium
CN110286813A (zh) * 2019-05-22 2019-09-27 Beijing Dajia Internet Information Technology Co., Ltd. Method and apparatus for determining an icon position

Also Published As

Publication number Publication date
EP3974953A1 (en) 2022-03-30
CN110286813B (zh) 2020-12-01
US11574415B2 (en) 2023-02-07
US20220084237A1 (en) 2022-03-17
EP3974953A4 (en) 2022-07-13
CN110286813A (zh) 2019-09-27

Similar Documents

Publication Publication Date Title
US9674395B2 (en) Methods and apparatuses for generating photograph
EP3163498B1 (en) Alarming method and device
JP6392468B2 (ja) Region recognition method and apparatus
US10007841B2 (en) Human face recognition method, apparatus and terminal
US10284773B2 (en) Method and apparatus for preventing photograph from being shielded
CN104918107B (zh) Identification processing method and device for video files
WO2022134382A1 (zh) Image segmentation method and apparatus, electronic device, storage medium, and computer program
CN105631797B (zh) Watermark adding method and device
WO2021051949A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2020233201A1 (zh) Method and apparatus for determining an icon position
CN110619350B (zh) Image detection method and apparatus, and storage medium
US10650502B2 (en) Image processing method and apparatus, and storage medium
CN105631803B (zh) Filter processing method and device
CN107944367B (zh) Face key point detection method and device
CN108154466B (zh) Image processing method and device
CN108122195B (zh) Picture processing method and device
WO2021057359A1 (zh) Image processing method, electronic device, and readable storage medium
CN112866801A (zh) Method and apparatus for determining a video cover, electronic device, and storage medium
CN107219989B (zh) Icon processing method, apparatus, and terminal
CN107292901B (zh) Edge detection method and device
CN106469446B (zh) Depth image segmentation method and segmentation apparatus
WO2023273050A1 (zh) Liveness detection method and apparatus, electronic device, and storage medium
CN106201238B (zh) Method and device for displaying focusing state
WO2023231009A1 (zh) Focusing method, apparatus, and storage medium
CN115866396A (zh) Image focusing method, apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20809629

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020809629

Country of ref document: EP

Effective date: 20211222