EP4343520A1 - Image feature obtaining method and electronic device - Google Patents

Image feature obtaining method and electronic device

Info

Publication number
EP4343520A1
Authority
EP
European Patent Office
Prior art keywords
image
feature
obtaining
display
window
Prior art date
Legal status
Pending
Application number
EP22827120.1A
Other languages
German (de)
English (en)
Inventor
Yaming Wang
Jinfei Wang
Yingwen WANG
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP4343520A1

Classifications

    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F 3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/04883 Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06F 16/583 Retrieval of still image data characterised by using metadata automatically derived from the content
    • G06F 2203/04803 Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • This application relates to the field of terminals, and in particular, to an image feature obtaining method and an electronic device.
  • With image-text editing functions, for example, handwriting or drawing, a user can write texts or draw images on the screens of electronic devices, so that the user can record information or create works of art.
  • the user may need brushes of specific colors, or need to draw specific textures.
  • the electronic device needs to provide image features, for example, the color or the texture, for the user.
  • the electronic device may provide a preset image feature library for the user, and the image feature library may include a plurality of preset image features.
  • the electronic device determines that the image feature is selected by the user, and then may perform subsequent operations, for example, image-text editing, based on the image feature selected by the user.
  • the image features included in the preset image feature library are usually very limited. Take a color feature as an example.
  • the electronic device may provide the user with only several common color options, even though colors themselves can vary infinitely. Therefore, the conventional technology imposes a large limitation on providing image features and cannot meet user requirements.
  • this application provides an image feature obtaining method and an electronic device, to improve flexibility and diversity of obtained image features.
  • an embodiment of this application provides an image feature obtaining method, including:
  • a first device receives a first obtaining instruction, where the first obtaining instruction indicates the first device to obtain an image feature; and the first device obtains a first feature in response to the first obtaining instruction.
  • the first feature is a feature of a first image of a target device, the target device is the first device or a second device associated with the first device, and the first image is at least a partial image in a picture currently displayed on a display of the target device.
  • the image feature is a visual feature of an image, and the image feature may be used to edit a text or an image.
  • the text or the image may be enabled to have the image feature.
  • That the first device is associated with the second device may mean that the first device is connected to, or can be connected to, the second device through communication.
  • the first device and the second device may be devices currently connected by using a short-range communication technology.
  • the first device and the second device may be devices corresponding to a same user identifier.
  • the first device may be a tablet computer of a user A
  • the second device may be a mobile phone of the user A.
  • the first device may obtain the first feature.
  • the first feature is the feature of the first image of the target device
  • the target device may be the first device, or may be the second device associated with the first device
  • the first image may be at least a partial image of the picture currently displayed on the display of the target device.
  • Content of the picture comes from a wide range of sources, and may be an interface of an application in the target device, or may be a superposition of interfaces of a plurality of applications in the target device.
  • the picture may be a frame of a video that is being played, or may be a list of a plurality of photos included in an album. Therefore, the first image is not limited to a specific application or to the first device, and the first features that may be included in the first image are very flexible and diversified. This greatly improves the flexibility and diversity of obtained image features and fully meets user requirements.
  • the first feature may be a feature of a color type or a feature of a texture type.
  • the target device is the first device. That the first device obtains a first feature in response to the first obtaining instruction includes:
  • the first device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the first device; and the first device extracts the first feature from the first image.
  • the first image is at least a partial image of the picture currently displayed on the display of the first device, the picture is not limited by an application, and correspondingly, the first image is not limited by an application in the first device. Therefore, the first feature can be obtained from a source outside a preset image feature library, for example, an area outside an interface of an image-text editing application. Therefore, flexibility and diversity of obtained image features are improved, and a user requirement can be fully met. In addition, in comparison with a manner in which a user uploads the image feature to the first device from outside the first device, an operation is simpler.
  • the method further includes:
  • the first device creates a first window according to the first obtaining instruction, where a size of the first window is the same as a size of the display of the first device, and the first window is a transparent window located over another window displayed on the display.
  • That the first device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the first device includes:
  • the first device obtains the first image from the picture currently displayed on the display of the first device.
  • the method further includes:
  • the first device closes the first window.
  • the display of the first device may include a plurality of windows such as a second window and a third window, and the windows may belong to different applications. For example, a window of a drawing application may be displayed on the left side of the display, and a window of an album may be displayed on a right side of the display. Therefore, to avoid confusion between an operation of obtaining the first image by the first device and another operation (for example, an operation on the album), and improve reliability of obtaining the first image, the first device may create the first window.
  • The foregoing interface, including the plurality of windows such as the second window and the third window, jointly forms the picture currently displayed on the display of the first device.
  • The transparency of the first window may be preset in advance (for example, by a developer), or may be set based on input received from the user before the first window is created.
  • the transparency of the first window may be 100%.
  • the transparency of the first window may be another value.
  • the transparency of the first window is not specifically limited in embodiments of this application.
  • That the first device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the first device includes:
  • the first device determines a first closed area on the display of the first device based on the first screen capture operation; and the first device obtains the first image based on the first closed area.
  • That the first device determines a first closed area on the display of the first device based on the first screen capture operation includes:
  • the first device determines a first position based on the first screen capture operation, and determines an area in a first frame at the first position as the first closed area, where the first frame is a preset frame; or if the first screen capture operation is a sliding operation, the first device determines, as the first closed area, a closed area formed by a sliding track of the sliding operation.
  • the user may flexibly and accurately obtain the first image in any size or any shape by sliding on the display.
  • the first position of the first closed area may be specified, and then the first closed area and the first image are quickly determined with reference to the first frame of a preset shape and size. This can also reduce difficulty in obtaining the first image.
  • the first closed area may be a maximum closed area or a minimum closed area formed by the sliding track.
  • a head and a tail of the sliding track may be connected to obtain a closed area.
  • the first frame (including the size and the shape) may be determined through presetting.
  • the first device may provide a plurality of different frames for the user in advance, and when receiving a selection operation by the user on any frame, determine the frame as the first frame.
  • the size, the shape, and a setting manner of the first frame are not specifically limited in embodiments of this application.
  • That the first device obtains the first image based on the first closed area includes:
  • the first device captures the first image in the first closed area from the picture currently displayed on the display of the first device; or the first device captures, as a second image, the picture currently displayed on the display of the first device, and crops the second image based on the first closed area, to obtain the first image.
  • the first image is determined based on the first closed area, and may include less image data than the second image, so that data to be analyzed for subsequently extracting a first feature is less and the extracted first feature is more accurate. This can improve efficiency and accuracy of obtaining the first feature.
  • the second image may also be obtained for subsequently obtaining the first feature.
  • the target device is the second device. That the first device obtains a first feature in response to the first obtaining instruction includes:
  • the first device sends a first obtaining request to the second device, where the first obtaining request corresponds to the first obtaining instruction, and the first obtaining request requests to obtain an image feature from the second device; and the first device receives the first feature fed back by the second device.
  • the first image is at least a partial image in a picture currently displayed on a display of the second device, the picture is not limited by an application, and correspondingly, the first image is not limited by the first device. Therefore, the first device can obtain the first feature from the second device other than the first device. This further improves flexibility and diversity of obtained image features, and fully meets a user requirement. For example, the user may apply a color or texture of a photo in an album of a mobile phone to a drawing application of a tablet computer.
  • the first device may communicate with the second device through a distributed data exchange channel.
  • the first obtaining request may further carry a target feature type.
  • the target device is the second device. That the first device obtains a first feature in response to the first obtaining instruction includes:
  • the first device sends a second obtaining request to the second device, where the second obtaining request corresponds to the first obtaining instruction, and the second obtaining request requests to obtain an image from the second device;
  • Before the first device obtains the first feature in response to the first obtaining instruction, the method further includes:
  • the first device receives a first setting instruction, where the first setting instruction indicates that the target device is the first device; or the first device receives a second setting instruction, where the second setting instruction indicates that the target device is the second device.
  • the first device may also determine, in another manner, that the target device is the first device or the second device, or the first device may be configured to obtain the image feature only from the first device or the second device, so that there is no need to determine whether the target device is the first device or the second device.
  • Before the first device obtains the first feature in response to the first obtaining instruction, the method further includes:
  • the first device receives a third setting instruction, where the third setting instruction indicates to obtain a target feature type of the image feature.
  • That the first device obtains a first feature in response to the first obtaining instruction includes:
  • the first device obtains the first feature based on the target feature type.
  • the target feature type includes a color type or a texture type.
  • the first device may accurately extract the first feature of the target feature type from the first image.
  • the first device may process the first image based on at least one feature type, to obtain at least one type of first feature.
  • the method further includes:
  • the first device receives a third obtaining request of an associated third device, where the third obtaining request requests to obtain an image feature from the first device;
  • the first device may also be used as a provider of the image feature, to provide the second feature for the associated third device.
  • the first device may send the third image to the third device.
  • the third device processes the third image to obtain the second feature.
  • After obtaining the first feature, the first device may perform an image-text editing operation based on the first feature.
  • the first device may perform the image-text editing operation in an image-text editing application based on the first feature, to apply the first feature to a new text or image, so that a text or an image obtained after the operation has the first feature.
  • the first device may add the obtained first feature to an image feature library, for example, a built-in palette or a built-in texture image library, so that the user can directly obtain the first feature from the image feature library, for example, the built-in palette or the built-in texture image library next time.
  • the first device or the second device may not obtain the first feature from the first image, but directly copy the first image to the image-text editing application.
  • an image feature obtaining method including:
  • a second device receives a first obtaining request sent by a first device, where the first obtaining request requests to obtain an image feature from the second device;
  • that the second device obtains a first image includes:
  • the second device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the second device.
  • Before the second device obtains, based on the first screen capture operation, the first image from the picture currently displayed on the display of the second device, the method further includes:
  • the second device creates a first window based on the first obtaining request.
  • a size of the first window is the same as a size of the display of the second device, and the first window is a transparent window located over another window displayed on the display.
  • That the second device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the second device includes:
  • the second device obtains the first image from the picture currently displayed on the display of the second device.
  • the method further includes:
  • the second device closes the first window.
  • That the second device obtains, based on a first screen capture operation, the first image from the picture currently displayed on the display of the second device includes:
  • the second device determines a first closed area on the display of the second device based on the first screen capture operation; and the second device obtains the first image based on the first closed area.
  • That the second device determines a first closed area on the display of the second device based on the first screen capture operation includes:
  • the second device determines a first position based on the first screen capture operation, and determines an area in a first frame at the first position as the first closed area, where the first frame is a preset frame; or if the first screen capture operation is a sliding operation, the second device determines, as the first closed area, a closed area formed by a sliding track of the sliding operation.
  • that the second device obtains the first image based on the first closed area includes:
  • the second device captures the first image in the first closed area from the picture currently displayed on the display of the second device; or the second device captures, as a second image, the picture currently displayed on the display of the second device, and crops the second image based on the first closed area, to obtain the first image.
  • When receiving the second obtaining request sent by the first device, the second device may obtain the first image and feed back the first image to the first device, and the first device extracts the first feature from the first image.
  • an embodiment of this application provides an image feature obtaining apparatus.
  • the apparatus may be disposed in an electronic device, and the apparatus may be configured to perform the method according to any one of the first aspect and/or any one of the second aspect.
  • the apparatus may include a hand-drawing brush engine module.
  • the hand-drawing brush engine module may be configured to perform interaction between the electronic device and a user, for example, trigger the electronic device to obtain an image feature according to the method provided in embodiments of this application.
  • the apparatus may include a window management service module.
  • the window management service module may be configured to manage a life cycle of each window in the electronic device, detect a touch event for each window, and the like.
  • the electronic device may create and close a first window by using the window management service module.
  • the apparatus may include a layer composition module.
  • the layer composition module may be configured to composite obtained pictures of a plurality of windows into one image, and therefore may be configured to obtain a first image or a second image.
  • the apparatus may include a distributed task scheduling module.
  • the distributed task scheduling module may be used by the electronic device to invoke a service from another device through a distributed data exchange channel.
  • an embodiment of this application provides an electronic device, including a memory and a processor.
  • the memory is configured to store a computer program
  • the processor is configured to perform the method according to any one of the first aspect and/or any one of the second aspect when invoking the computer program.
  • an embodiment of this application provides a chip system.
  • the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, to implement the method according to any one of the first aspect and/or any one of the second aspect.
  • the chip system may be a single chip or a chip module including a plurality of chips.
  • an embodiment of this application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • When the computer program is executed by a processor, the method according to any one of the first aspect and/or any one of the second aspect is implemented.
  • an embodiment of this application provides a computer program product.
  • When the computer program product runs on an electronic device, the electronic device is enabled to perform the method according to any one of the first aspect and/or the second aspect.
  • An image feature obtaining method provided in embodiments of this application may be applied to an electronic device, for example, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA).
  • FIG. 1 is a schematic diagram of a structure of an electronic device 100 according to an embodiment of this application.
  • the electronic device 100 may include a processor 110, a memory 120, a communication module 130, a display 140, and the like.
  • the processor 110 may include one or more processing units, and the memory 120 is configured to store program code and data.
  • the processor 110 may execute computer-executable instructions stored in the memory 120, and the processor 110 is configured to control and manage an action of the electronic device 100.
  • the communication module 130 may be configured to: perform communication between internal modules of the electronic device 100, or perform communication between the electronic device 100 and another external electronic device, or the like.
  • the communication module 130 may include an interface, for example, a USB interface.
  • the USB interface may be an interface that complies with a USB standard specification, and may be specifically a mini USB interface, a micro USB interface, a USB Type C interface, or the like.
  • the USB interface may be configured to connect to a charger to charge the electronic device 100, or may be configured to perform data transmission between the electronic device 100 and a peripheral device, or may be configured to connect to a headset, to play audio by using the headset.
  • the interface may be further configured to connect to another electronic device, for example, an AR device.
  • the communication module 130 may include an audio component, a radio frequency circuit, a Bluetooth chip, a wireless fidelity (wireless fidelity, Wi-Fi) chip, a near-field communication (near-field communication, NFC) module, and the like.
  • the communication module 130 may perform interaction between the electronic device 100 and the another electronic device in a plurality of different manners.
  • the display 140 may display an image, a video, or the like on a human-computer interaction interface.
  • the electronic device 100 may further include a pressure sensor 150, configured to sense a pressure signal, and may convert the pressure signal into an electrical signal.
  • the pressure sensor 150 may be disposed on the display.
  • There are a plurality of types of pressure sensors 150 such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor.
  • the capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor, capacitance between electrodes changes. The electronic device 100 determines pressure intensity based on the change in the capacitance. When a touch operation is performed on the display, the electronic device 100 detects intensity of the touch operation through the pressure sensor 150.
  • the electronic device 100 may also calculate a touch position based on a detection signal of the pressure sensor 150.
  • touch operations that are performed in a same touch position but have different touch operation intensity may correspond to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on a messaging application icon, an instruction for viewing an SMS message is executed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on the messaging application icon, an instruction for creating a new SMS message is executed.
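  • To make the threshold behaviour above concrete, the following Python sketch dispatches a touch on the messaging icon by comparing the sensed intensity against a first pressure threshold. The threshold value and the handler name are illustrative assumptions, not values defined by this application.

```python
# Minimal sketch of threshold-based touch dispatch; the threshold is a
# hypothetical normalized pressure value, not one from this application.
FIRST_PRESSURE_THRESHOLD = 0.5

def handle_messaging_icon_touch(pressure: float) -> str:
    """Return the instruction that would be executed for the given touch intensity."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view SMS message"        # light press
    return "create new SMS message"      # firm press

print(handle_messaging_icon_touch(0.2))  # view SMS message
print(handle_messaging_icon_touch(0.8))  # create new SMS message
```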
  • the electronic device 100 may further include a peripheral device 160, for example, a mouse, a keyboard, a loudspeaker, a microphone, or a stylus.
  • the structure of the electronic device 100 is not specifically limited in embodiments of this application.
  • the electronic device 100 may further include more or fewer components than those shown in the figure, some components may be combined, some components may be split, or different component arrangements may be used.
  • the components shown in the figure may be implemented by using hardware, software, or a combination of software and hardware.
  • a software system of the electronic device 100 may use a layered architecture, an event-driven architecture, a microkernel architecture, a micro service architecture, or a cloud architecture.
  • FIG. 2 is a block diagram of a software structure of the electronic device 100 according to an embodiment of this application.
  • the electronic device 100 may include an application layer 210 and a system layer 220.
  • the application layer 210 may include a series of application packages.
  • the application package may include an image-text editing application, for example, a document editing application and a drawing application.
  • the image-text editing application may be used to edit a text or an image, for example, generate a text, modify a text style, or draw an image.
  • the application layer 210 may include a built-in palette 211 and a built-in texture image library 212.
  • the built-in palette 211 may include a plurality of preset color features.
  • the built-in texture image library 212 may include a plurality of texture features that are preset or that are uploaded by a user in advance.
  • the application layer 210 may include a hand-drawing brush engine module 213.
  • the hand-drawing brush engine module 213 may be configured to perform interaction between the electronic device 100 and the user, for example, trigger the electronic device 100 to obtain an image feature according to the method provided in embodiments of this application.
  • the system layer 220 may include a window management service module 221 and a layer composition module 222.
  • the window management service module 221 may be configured to manage a life cycle of each window in the electronic device 100, detect a touch event for each window, and the like.
  • the touch event may include touch coordinates, a pressure value, and the like.
  • the layer composition module 222 may be configured to composite obtained pictures of a plurality of windows into one image.
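  • As a rough illustration of what such composition involves (not the actual implementation of the layer composition module 222), the following Python/NumPy sketch blends per-window RGBA pictures, ordered bottom to top, into one RGB image with the standard "over" operator.

```python
import numpy as np

def composite_windows(layers):
    """Composite per-window RGBA pictures (bottom to top) into one RGB image.

    Each layer is a float array of shape (H, W, 4) with values in [0, 1].
    This is a simplified stand-in for the layer composition step.
    """
    out = np.zeros(layers[0].shape[:2] + (3,))
    for layer in layers:                          # paint from bottom to top
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)   # "over" operator
    return out

# Example: an opaque background window and a half-transparent window on top.
bg = np.ones((4, 4, 4)); bg[..., :3] = [0.2, 0.4, 0.6]
top = np.zeros((4, 4, 4)); top[..., :3] = 1.0; top[..., 3] = 0.5
print(composite_windows([bg, top])[0, 0])         # blended pixel: [0.6 0.7 0.8]
```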
  • system layer 220 may further include a distributed task scheduling module 223.
  • the distributed task scheduling module 223 may be configured for the electronic device 100 to invoke a service from another device through a distributed data exchange channel.
  • system layer 220 may further include an image drawing module.
  • the image drawing module may be configured to draw an image on the display 140.
  • An image-text editing function is an important function of the electronic device.
  • the user may edit a text or an image on the electronic device by using the image-text editing function.
  • In a process of editing the text or the image, the user usually performs personalized processing on the text or the image, for example, setting the text or the image to a specific color, or drawing a specific texture in an area of the text or the image.
  • FIG. 3 is a flowchart of an image-text editing method according to an embodiment of this application.
  • the user may touch the display 140 of the electronic device 100 by using a body part or a stylus, to interact with the electronic device 100, for example, select a text area or an image area to be colored or to draw a texture in.
  • S302 The electronic device 100 processes a touch event by using the system layer 220, to obtain a touch event object.
  • the electronic device 100 may process the touch event by using the system layer 220, encapsulate coordinates and a pressure value of the touch event into the touch event object, and provide the touch event object to the application layer 210.
  • S303 The electronic device 100 performs corresponding logic processing based on the touch event object by using the application layer 210.
  • an image-text editing application of the application layer 210 in the electronic device 100 may perform internal logic processing of the application, for example, determine that the user opens the built-in palette 211, determine a color selected by the user in the built-in palette 211, determine that the user opens the built-in texture image library 212, or determine a texture feature selected by the user in the built-in texture image library.
  • S304 The electronic device 100 performs an image-text editing operation by using the system layer 220.
  • the electronic device 100 may perform the image-text editing operation by using the system layer 220, and display an image-text editing result on the display 140.
  • An editing operation on an image is used as an example. If coloring is performed on the image, coloring processing may be performed on the image based on a color feature determined from the built-in palette 211. If texture drawing is performed on the image, a corresponding texture may be drawn based on a texture feature determined from the built-in texture image library 212.
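  • The coloring branch of S304 can be pictured as filling the selected area with the chosen color feature. Below is a minimal Python/NumPy sketch of that single step; the array shapes and the sample color are assumptions for illustration, not part of this application.

```python
import numpy as np

def color_region(image, mask, rgb):
    """Fill the masked area of an RGB image with the selected color feature."""
    edited = image.copy()
    edited[mask] = rgb                   # apply the color feature to the area
    return edited

canvas = np.zeros((8, 8, 3), dtype=np.uint8)          # blank drawing area
area = np.zeros((8, 8), dtype=bool); area[2:6, 2:6] = True
print(color_region(canvas, area, (255, 0, 0))[3, 3])  # [255   0   0]
```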
  • the electronic device can provide the color feature for the user only by using the built-in palette, and provide the texture feature for the user only by using the built-in texture image feature library.
  • image feature libraries such as the built-in palette and the built-in texture image feature library are usually preset by a developer of the image-text editing application, image features included in the image feature libraries are quite limited, and it is difficult to meet a user requirement.
  • an embodiment of this application provides an image feature obtaining system and method.
  • FIG. 4 is a block diagram of an image feature obtaining system according to an embodiment of this application.
  • the system may include a first device 410, and may further include a second device 420 associated with the first device 410 and a distributed data exchange channel 430 for data exchange between the first device 410 and the second device 420. That the first device 410 is associated with the second device 420 may mean that the first device 410 is connected to, or can be connected to, the second device 420 through communication.
  • the first device 410 and the second device 420 may be devices currently connected by using a short-range communication technology.
  • the first device 410 and the second device 420 may be devices corresponding to a same user identifier.
  • the first device 410 may be a tablet computer of a user A
  • the second device 420 may be a mobile phone of the user A.
  • the first device 410 may include an application layer 411 and a system layer 412.
  • the application layer 411 may include a hand-drawing brush engine module 413.
  • the system layer 412 may include a window management service module 414, a layer composition module 415, and a distributed task scheduling module 416.
  • the second device 420 may include an application layer 421 and a system layer 422.
  • the application layer 421 may include a hand-drawing brush engine module 423.
  • the system layer 422 may include a window management service module 424, a layer composition module 425, and a distributed task scheduling module 426.
  • the hand-drawing brush engine module 413, the window management service module 414, the layer composition module 415, and the distributed task scheduling module 416 may be respectively similar to or the same as the hand-drawing brush engine module 213, the window management service module 221, the layer composition module 222, and the distributed task scheduling module 223 in the electronic device 100 in FIG. 2 .
  • the hand-drawing brush engine module 423, the window management service module 424, the layer composition module 425, and the distributed task scheduling module 426 may be respectively similar to or the same as the hand-drawing brush engine module 213, the window management service module 221, the layer composition module 222, and the distributed task scheduling module 223 in the electronic device 100 in FIG. 2 .
  • the hand-drawing brush engine module in the first device 410 and/or the hand-drawing brush engine module in the second device 420 may be omitted, and when the first device 410 does not need to obtain an image feature from the second device 420, the distributed task scheduling module 416 in the first device 410 may also be omitted.
  • the application layer of the first device 410 and/or the application layer of the second device 420 may also include at least one of a built-in palette and a built-in texture image library.
  • the first device 410 may obtain a first feature.
  • the first feature is a feature of a first image of a target device.
  • the target device may be the first device 410 or the second device 420.
  • the first image may be at least a partial image of a picture currently displayed by the target device.
  • Content of the picture displayed by the target device comes from a wide range of sources, and may be an interface of an application in the target device, or may be a superposition of interfaces of a plurality of applications in the target device.
  • the picture may be a frame of a video that is being played in full screen, or may be a list of a plurality of photos included in an album.
  • the first image is not limited by an application in the first device 410 or the first device 410.
  • the first features that may be included in the first image are very flexible and diversified. This greatly improves flexibility and diversity of obtained image features.
  • the user may open a favorite photo on a display of the first device 410, so that the picture currently displayed on the display includes the photo. Then, the first image is obtained, and the first feature is obtained from the first image. To be specific, the image feature can be quickly obtained from the favorite photo of the user. In this way, a user requirement can be fully met.
  • FIG. 5 is a flowchart of an image feature obtaining method according to an embodiment of this application. It should be noted that the method is not limited to a specific sequence described in FIG. 5 and the following. It should be understood that, in other embodiments, sequences of some steps in the method may be exchanged according to an actual requirement, or some steps in the method may be omitted or deleted. The method may be applied to the first device or interaction between the first device and the second device in FIG. 4 , and includes the following steps.
  • S501 The first device receives a first obtaining instruction.
  • the first obtaining instruction indicates the first device to obtain an image feature.
  • the image feature may be a visual feature of an image, and the image feature may be used to edit a text or an image.
  • the text or the image may be enabled to have the image feature.
  • a feature type of the image feature may include a color type and a texture type.
  • the feature type of the image feature may further include another feature type, for example, at least one of a shape type and a spatial relationship type.
  • the first device may provide, for a user through a human-computer interaction interface, a control used to trigger obtaining of the image feature, and receive, based on the control, the first obtaining instruction submitted by the user.
  • the display of the first device is a touchscreen, and the user may tap or slide on the screen by using a finger or a stylus, to interact with the first device.
  • For example, the lower left corner of the display of the first device includes an "Obtain an image feature" button. When receiving a tap operation on the button, the first device may determine that the first obtaining instruction is received.
  • the image feature may include a plurality of feature types. Therefore, to improve accuracy of obtained image features, the first device may receive a third setting instruction submitted by the user, where the third setting instruction indicates a target feature type.
  • the target feature type may include a color type or a texture type. In some other embodiments, the target feature type may also be carried in the first obtaining instruction.
  • the first device may continue to provide, for the user, a second-level menu used to determine a feature type, as shown in FIG. 7 .
  • the second-level menu includes a plurality of to-be-selected feature types.
  • the first device may receive a first setting instruction or a second setting instruction submitted by the user.
  • the first setting instruction may carry a device identifier of the first device, and indicates that a target device for obtaining the image feature is the first device.
  • the second setting instruction may carry a device identifier of the second device, and indicates that the target device for obtaining the image feature is the second device.
  • the device identifier of the first device or the device identifier of the second device may also be carried in the first obtaining instruction.
  • the first device may continue to display a device selection interface shown in FIG. 8 .
  • the device selection interface includes at least one device identifier.
  • the first device may determine that an electronic device corresponding to the device identifier is the target device. For example, when receiving a tap operation on the device identifier of the first device, the first device may determine that the first setting instruction is received, and that the first device is the target device.
  • Similarly, when receiving a tap operation on the device identifier of the second device, the first device may determine that the second setting instruction is received, and that the second device is the target device.
  • the second device may be a device associated with the first device.
  • the first device may further receive other indication information that indicates the image feature obtaining manner.
  • the indication information may be separately indicated by a separate setting instruction, or may be carried in the first obtaining instruction.
  • a manner in which the first device receives the indication information that indicates the image feature obtaining manner is not specifically limited in embodiments of this application.
  • S502 The first device determines whether to obtain the image feature in a cross-device manner. If in the cross-device manner, S506 is performed. If not in the cross-device manner, S503 is performed.
  • the first device may determine whether to obtain the image feature in the cross-device manner.
  • the first device may determine to obtain the image feature in the cross-device manner.
  • the first device determines to extract the image feature in the cross-device manner from the second device corresponding to the received device identifier.
  • the first device may determine whether the first setting instruction, the second setting instruction, or the first obtaining instruction carries the device identifier. If the first setting instruction or the first obtaining instruction does not carry any device identifier or carries the device identifier of the first device, it may be determined not to extract the image feature in the cross-device manner. If the second setting instruction or the first obtaining instruction carries a device identifier, and the device identifier is not the device identifier of the first device, the image feature is to be extracted, in the cross-device manner, from the second device corresponding to the received device identifier.
  • the first device may also be configured to obtain the image feature only from the local end or obtain the image feature only from the second device. Therefore, S502 may not be performed, that is, S502 is an optional step.
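  • The decision in S502 reduces to checking which device identifier, if any, the instruction carries. A hedged Python sketch of that check follows; the "device_id" field name is an assumption made for illustration.

```python
def is_cross_device(instruction: dict, local_device_id: str) -> bool:
    """Decide whether the image feature is obtained in the cross-device manner."""
    target = instruction.get("device_id")
    if target is None or target == local_device_id:
        return False        # no identifier, or the local one: perform S503
    return True             # another device's identifier: perform S506

print(is_cross_device({}, "tablet-A"))                        # False -> S503
print(is_cross_device({"device_id": "phone-B"}, "tablet-A"))  # True  -> S506
```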
  • S503 The first device creates a first window.
  • the display of the first device may include a plurality of windows such as a second window and a third window, and the windows may belong to different applications. As shown in FIG. 6 to FIG. 8 , a window of a drawing application may be displayed on the left side of the display, and a window of an album may be displayed on the right side of the display. Therefore, to avoid confusion between an operation of obtaining a first image by the first device and another operation (for example, an operation on the album), and improve reliability of obtaining the first image, the first device may create the first window by using the foregoing window management service module.
  • a size of the first window may be the same as a size of the display of the first device, and the first window is a transparent window located over another window displayed on the display, that is, the first window is a global transparent window located over all applications of the first device.
  • The transparency of the first window may be preset in advance (for example, by a developer), or may be set based on input received from the user before the first window is created.
  • the transparency of the first window may be 100%.
  • the transparency of the first window may be another value.
  • the transparency of the first window is not specifically limited in embodiments of this application.
  • In this case, a schematic diagram of a display scenario may be shown in FIG. 9.
  • the scenario includes a first window 901 at a top layer.
  • The first window is a global transparent window whose transparency is 100%. An original display interface of the first device is located under the first window and includes a second window 902 and a third window 903.
  • the second window 902 is the window of the drawing application shown in FIG. 6 to FIG. 8
  • the third window is the window of the album shown in FIG. 6 to FIG. 8 .
  • the first device may obtain the first image in another manner. Therefore, S503 may not be performed, that is, S503 is an optional step.
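  • On a desktop system, the idea of S503 can be approximated with a full-screen, top-most, (nearly) transparent window that only collects the screen capture gesture and then closes. The following tkinter sketch is such an approximation, not the window management service of this application; transparency support and exact behaviour vary by platform.

```python
import tkinter as tk

track = []                                    # points of the sliding track

def on_drag(event):
    track.append((event.x, event.y))          # record the first screen capture operation

def on_release(event):
    root.destroy()                            # close the "first window" after the gesture

root = tk.Tk()
root.attributes("-fullscreen", True)          # same size as the display
root.attributes("-topmost", True)             # located over the other windows
root.attributes("-alpha", 0.01)               # effectively transparent overlay
root.bind("<B1-Motion>", on_drag)
root.bind("<ButtonRelease-1>", on_release)
root.mainloop()
print(f"collected {len(track)} track points")
```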
  • S504 The first device obtains the first image.
  • the first image may be at least a partial image in a picture currently displayed on the display of the first device.
  • the first device may obtain, based on a first screen capture operation, the first image from the picture currently displayed on the display of the first device. In some embodiments, when the first device creates the first window, the first device may receive the first screen capture operation on the first window.
  • the user may set an area range of the to-be-obtained first image by using the first screen capture operation.
  • the first device may determine a first closed area on the display of the first device based on the first screen capture operation. An image in the first closed area is the first image that the user is to obtain.
  • the first screen capture operation may be used to directly determine the first closed area.
  • the first screen capture operation may include a sliding operation.
  • the first device may determine, as the first closed area, a closed area formed by a sliding track of the sliding operation.
  • the closed area may be a maximum closed area or a minimum closed area formed by the sliding track.
  • the user may flexibly and accurately obtain the first image in any size or any shape by sliding on the display.
  • a photo on the upper right of the display of the first device includes river banks on two sides and a jumping person on the upper side.
  • the user draws an irregular first closed area 1001 in the lower right corner of the photo by using the stylus, where the first closed area 1001 includes the river bank on the right side.
  • the head and the tail of the sliding track may be connected to obtain a closed area.
  • the first screen capture operation may be used to determine a first position of the first closed area on the display, and a preset first frame may be used to determine a size and a shape of the first closed area.
  • the first device may determine the first position based on the first screen capture operation, and determine an area in the first frame at the first position as the first closed area. Because the user does not need to draw the first closed area, difficulty in obtaining the first image can be reduced.
  • the first frame is a circular frame with a diameter of 3 cm.
  • a photo in the lower right corner of the display of the first device includes a half-body photo of a person.
  • The user taps the display or slides on the display, and the tapped position or the end point of the sliding track is used as the first position.
  • the first device generates a circular frame with a diameter of 3 cm at the first position.
  • An area in the circular frame is the first closed area 1001, and the first closed area 1001 includes a portrait of the person.
  • the first frame (including the size and the shape) may be determined through presetting.
  • the first device may provide a plurality of different frames for the user in advance, and when receiving a selection operation by the user on any frame, determine the frame as the first frame.
  • the size, the shape, and a setting manner of the first frame are not specifically limited in embodiments of this application.
  • the first screen capture operation may alternatively include an operation in another manner, provided that the first closed area can be determined.
  • An operation manner of the first screen capture operation is not specifically limited in embodiments of this application.
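  • Both variants of the first closed area can be represented as a boolean mask over the display. The sketch below builds such a mask either from a preset circular frame placed at the first position or from a sliding track whose head and tail are joined; the pixel sizes are arbitrary stand-ins (the 3 cm frame would have to be converted to pixels using the display's DPI).

```python
import numpy as np
from matplotlib.path import Path

def circular_frame_mask(shape, center, radius):
    """Preset-frame variant: circular first frame placed at the first position."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2

def sliding_track_mask(shape, track):
    """Sliding variant: join head and tail of the track and rasterize the polygon."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xx.ravel(), yy.ravel()])
    return Path(track, closed=True).contains_points(pixels).reshape(h, w)

display = (100, 160)  # (height, width) in pixels, illustrative only
print(circular_frame_mask(display, center=(120, 40), radius=20).sum())
print(sliding_track_mask(display, [(10, 10), (60, 15), (50, 70), (12, 60)]).sum())
```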
  • the first device may obtain the first image based on the first closed area.
  • the first device may capture, as a second image, the picture currently displayed on the display of the first device, and crop the second image based on the first closed area, to obtain the first image.
  • the first device may first capture a screen of an entire interface on the display of the first device, and then obtain, through cropping based on the first closed area, the first image from the second image obtained through screen capture.
  • the first device may capture the first image in the first closed area from the picture currently displayed on the display of the first device.
  • the first device may determine, based on a position relationship between each window and the first closed area, a picture of at least one window that matches the first closed area, and compose, based on an upper level-lower level relationship between the at least one window, the picture of the at least one window into the first image by using the layer composition module.
  • the first image is determined based on the first closed area and may include less image data than the second image, so that less data needs to be analyzed when the first feature is subsequently extracted and the extracted first feature is more accurate. This can improve efficiency and accuracy of obtaining the first feature.
  • alternatively, the second image may also be used for subsequently obtaining the first feature.
  • the first device may close the first window after obtaining the first image, so that the user can continue to interact with another window subsequently.
  • S505 The first device obtains the first feature. Then, the first device may perform S511.
  • the first device may perform analysis processing on the first image, to extract the first feature.
  • the first image is at least a partial image of the picture currently displayed on the display of the first device, the picture is not limited by an application, and correspondingly, the first image is not limited by an application in the first device. Therefore, the first feature can be obtained from a source outside a preset image feature library, for example, an area outside an interface of an image-text editing application. Therefore, flexibility and diversity of obtained image features are improved, and a user requirement can be fully met. In addition, in comparison with a manner in which the user uploads the image feature to the first device from outside the first device, an operation is simpler.
  • the first device may process the first image based on the target feature type, to obtain the first feature of the target feature type.
  • the first device may perform type analysis on a color of the first image, and the obtained first feature is a feature of the color type, for example, a red green blue (RGB) value.
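  • For example, a minimal colour-type extraction could reduce the first image to its average RGB value, in the spirit of a colour picker (Python/NumPy sketch; the averaging choice is an assumption, and a dominant-colour computation would fit equally well).

```python
import numpy as np

def mean_rgb(first_image):
    """Average RGB value of the first image as a simple colour-type first feature."""
    arr = np.asarray(first_image.convert("RGB"), dtype=np.float64).reshape(-1, 3)
    return tuple(int(round(c)) for c in arr.mean(axis=0))
```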
  • if the feature type carried in the first obtaining instruction is a texture type, the first device may perform type analysis on a texture of the first image, and the obtained first feature is a feature of the texture type.
  • the first device may process the first image based on at least one feature type, to obtain at least one type of first feature.
  • the first device may analyze the first image by using a color histogram, a color set, a color moment, a color aggregation vector, a color correlogram, or the like.
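  • As a hedged example of the colour-histogram and colour-moment analyses mentioned above (NumPy sketch on an H x W x 3 uint8 array; the bin count and moment choices are assumptions of the example):

```python
import numpy as np

def color_histogram(arr, bins=8):
    """Joint RGB histogram with `bins` levels per channel, normalised to sum to 1."""
    quant = (arr // (256 // bins)).reshape(-1, 3).astype(np.int64)
    idx = quant[:, 0] * bins * bins + quant[:, 1] * bins + quant[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def color_moments(arr):
    """First three colour moments (mean, standard deviation, skewness) per channel."""
    flat = arr.reshape(-1, 3).astype(np.float64)
    mean = flat.mean(axis=0)
    std = flat.std(axis=0)
    skew = np.cbrt(((flat - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])
```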
  • the first device may analyze the first image by using a statistics method, a geometry method, a model method, a signal processing method, or the like, or may perform blurring, noise reduction, salt-and-pepper noise addition, or the like on the first image.
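  • A minimal statistics-style texture sketch follows (NumPy, on a grayscale array; the chosen statistics are illustrative and are not the specific statistics/geometry/model/signal-processing methods the text refers to).

```python
import numpy as np

def texture_stats(gray):
    """Simple gradient-based texture statistics for a 2-D grayscale array."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return {
        "contrast": float(gray.std()),                       # global intensity spread
        "mean_gradient": float(magnitude.mean()),            # average local variation
        "edge_density": float((magnitude > magnitude.mean()).mean()),
    }
```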
  • the first device may analyze the first image by using a boundary feature method, a Fourier shape description method, a geometric parameter method, a shape invariant moment method, or the like.
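  • For the shape-invariant-moment method, one common concrete choice is Hu moments; the sketch below uses OpenCV on a binary region mask (OpenCV as a dependency is an assumption of the example, not of the method).

```python
import cv2
import numpy as np

def hu_moment_feature(binary_mask):
    """Seven Hu moments of a binary shape mask, log-scaled for numerical stability."""
    moments = cv2.moments(binary_mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
```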
  • the first device may segment the first image into a plurality of image blocks, then extract a feature of each image block, and establish an index.
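  • A rough sketch of the block-segmentation-and-index idea (NumPy; fixed-size square blocks and a mean-colour feature per block are assumptions, and any per-block feature could be indexed instead):

```python
import numpy as np

def block_feature_index(arr, block=32):
    """Split an H x W x C image array into blocks and index each block's mean colour."""
    h, w = arr.shape[:2]
    index = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = arr[r:r + block, c:c + block].reshape(-1, arr.shape[2])
            index[(r // block, c // block)] = patch.mean(axis=0)
    return index
```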
  • the first device may alternatively process the first image in another manner, to obtain the first feature.
  • the manner of obtaining the first feature from the first image is not specifically limited in embodiments of this application.
  • S506 The first device sends a first obtaining request to the second device.
  • the first device may send, to the second device by using the distributed data exchange channel, the first obtaining request corresponding to the first obtaining instruction, to request the second device to obtain the image feature.
  • the first obtaining request may carry the target feature type.
  • the first device may establish the distributed data exchange channel with the second device, and exchange data with the second device through the distributed data exchange channel, including sending the first obtaining request to the second device and subsequently receiving data fed back by the second device.
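  • The shape of such an exchange is sketched below as newline-delimited JSON over a TCP connection; this wire format and the field names are purely illustrative assumptions, since the distributed data exchange channel itself is not specified at this level of detail.

```python
import json
import socket

def send_first_obtaining_request(peer_addr, target_feature_type):
    """Send a first obtaining request to the peer device and wait for its reply."""
    request = {"type": "first_obtaining_request", "feature_type": target_feature_type}
    with socket.create_connection(peer_addr) as conn:
        conn.sendall((json.dumps(request) + "\n").encode("utf-8"))
        reply_line = conn.makefile("r", encoding="utf-8").readline()
    return json.loads(reply_line)   # e.g. {"first_feature": [210, 84, 36]}

# Example: ask the second device for a colour-type feature.
# feature = send_first_obtaining_request(("192.168.3.7", 9000), "color")
```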
  • when receiving the first obtaining request sent by the first device, the second device may display first notification information.
  • the first notification information is used to notify that the first obtaining request of the first device is to be responded to.
  • the second device may display an interface shown in FIG. 12 .
  • the top of the interface includes a first notification message whose content is "About to extract an image feature for the first device", and further includes an accept button and a reject button. If a tap operation by the user is received on the accept button, the following steps may continue to be performed. If a tap operation by the user is received on the reject button, subsequent operations may be stopped.
  • the first device may alternatively send a second obtaining request to the second device.
  • the second obtaining request is used to request the second device to obtain an image that is used to obtain an image feature.
  • S507 The second device creates a first window.
  • a manner in which the second device creates the first window may be the same as the manner in which the first device creates the first window in S503. Details are not described herein again.
  • S508 The second device obtains a first image.
  • a manner in which the second device obtains the first image may be the same as the manner in which the first device obtains the first image in S504. Details are not described herein again.
  • S509 The second device obtains a first feature.
  • a manner in which the second device obtains the first feature may be the same as the manner in which the first device obtains the first feature in S505. Details are not described herein again.
  • S510 The second device sends the first feature to the first device.
  • Then, the first device may perform S511.
  • the second device may send the first feature to the first device based on the foregoing distributed data exchange channel.
  • the second device may alternatively skip S509 and feed back the first image to the first device in S510.
  • the first device may perform S505 when receiving the first image, to extract the first feature.
  • the first device may obtain the first feature from the second device.
  • the first image is at least a partial image in a picture currently displayed on a display of the second device, the picture is not limited by an application, and correspondingly, the first image is not limited to the first device. Therefore, the first device can obtain the first feature from the second device, that is, from a device other than the first device. This further improves flexibility and diversity of obtained image features, and fully meets a user requirement. For example, the user may apply a color or texture of a photo in an album of a mobile phone to a drawing application of a tablet computer.
  • S511 The first device performs an image-text editing operation based on the first feature.
  • the first device may perform the image-text editing operation in an image-text editing application based on the first feature, to apply the first feature to a new text or image, so that an operated object obtained after the operation has the first feature.
  • the first device may bind the first feature to the stylus. If the first device detects a drawing operation by the stylus, the first device sets, as the first feature, an image feature of a text or an image drawn by using the drawing operation.
  • the first device may bind the RGB value to the stylus.
  • a color of a drawn track is a color indicated by the RGB value.
  • the first feature is a texture feature, and the first device may bind the texture feature to the stylus.
  • a texture feature of a drawn track is the texture feature bound to the stylus.
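  • A minimal model of binding the first feature to the stylus and applying it to newly drawn tracks is sketched below for the colour case (Pillow drawing; the class and its interface are hypothetical).

```python
from PIL import Image, ImageDraw

class Stylus:
    """Toy stylus that applies a bound colour feature to every drawn track."""

    def __init__(self):
        self.bound_feature = None                 # e.g. an (R, G, B) tuple

    def bind(self, feature):
        self.bound_feature = feature

    def draw_stroke(self, draw, points, width=4):
        colour = tuple(self.bound_feature) if self.bound_feature else (0, 0, 0)
        draw.line(points, fill=colour, width=width)

# Example: a track drawn after binding takes the colour indicated by the RGB value.
canvas = Image.new("RGB", (400, 300), "white")
stylus = Stylus()
stylus.bind((210, 84, 36))
stylus.draw_stroke(ImageDraw.Draw(canvas), [(20, 40), (180, 60), (350, 250)])
```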
  • the first device or the second device may not obtain the first feature from the first image, but may directly copy the first image to the image-text editing application.
  • the first device may alternatively not perform S511, that is, may not immediately apply the first feature; in other words, S511 is an optional step.
  • the first device may add the obtained first feature to an image feature library, for example, a built-in palette or a built-in texture image library, so that the user can directly obtain the first feature from the image feature library next time.
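  • A small sketch of such a feature library follows (a dictionary keyed by feature type; the API is an assumption, and features are stored as plain tuples so the duplicate check stays trivial).

```python
class FeatureLibrary:
    """In-memory stand-in for a built-in palette / texture image library."""

    def __init__(self):
        self._entries = {"color": [], "texture": []}

    def add(self, feature_type, feature):
        bucket = self._entries.setdefault(feature_type, [])
        if feature not in bucket:                 # avoid duplicate palette entries
            bucket.append(feature)

    def list(self, feature_type):
        return list(self._entries.get(feature_type, []))

# Example: the obtained RGB value becomes available in the palette next time.
library = FeatureLibrary()
library.add("color", (210, 84, 36))
```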
  • the first device may also provide, in a manner similar to that used by the second device, a second feature for a third device associated with the first device.
  • the first device may receive a third obtaining request of the third device.
  • the third obtaining request requests to obtain an image feature from the first device.
  • the first device obtains a third image.
  • the third image may be at least a partial image in the picture currently displayed on the display of the first device.
  • the first device extracts the second feature from the third image, and feeds back the second feature to the third device.
  • the first device may receive a fourth obtaining request from the third device.
  • the fourth obtaining request requests to obtain, from the first device, an image used to obtain an image feature.
  • the first device obtains the third image, and feeds back the third image to the third device.
  • the third device extracts the second feature from the third image.
  • the first device may obtain the first feature.
  • the first feature is the feature of the first image of the target device.
  • the target device may be the first device, or may be the second device associated with the first device.
  • the first image may be at least a partial image of the picture currently displayed on the display of the target device.
  • Content of the picture comes from a wide range of sources, and may be an interface of an application in the target device, or may be a superposition of interfaces of a plurality of applications in the target device.
  • the picture may be a frame of a video that is being played, or may be a list of a plurality of photos included in an album. Therefore, the first image is not limited by an application or the first device, and the first features that may be included in the first image are very flexible and diversified. This greatly improves flexibility and diversity of obtained image features, and fully meets a user requirement.
  • FIG. 13 is a schematic diagram of a structure of an electronic device 1300 according to an embodiment of this application.
  • the electronic device provided in embodiments includes a memory 1310 and a processor 1320.
  • the memory 1310 is configured to store a computer program.
  • the processor 1320 is configured to perform the methods described in the foregoing method embodiments when invoking the computer program.
  • the electronic device provided in embodiments may perform the methods in the foregoing method embodiments. An implementation principle and technical effects of the electronic device are similar. Details are not described herein again.
  • an embodiment of this application further provides a chip system.
  • the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, to implement the method in the foregoing method embodiment.
  • the chip system may be a single chip or a chip module including a plurality of chips.
  • An embodiment of this application further provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
  • An embodiment of this application further provides a computer program product.
  • When the computer program product is run on an electronic device, the electronic device is enabled to perform the method in the foregoing method embodiments.
  • When the foregoing integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, all or some of the procedures of the method in embodiments of this application may be implemented by a computer program instructing related hardware.
  • the computer program may be stored in a computer-readable storage medium. When the computer program is executed by the processor, steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code.
  • the computer program code may be in a source code form, an object code form, an executable file form, some intermediate forms, or the like.
  • the computer-readable storage medium may include at least: any entity or apparatus that can carry computer program code to a photographing apparatus/a terminal device, a recording medium, a computer memory, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
  • the computer-readable medium cannot be an electrical carrier signal or a telecommunication signal according to legislation and patent practices.
  • the disclosed apparatus/device and method may be implemented in other manners.
  • the described apparatus/device embodiment is merely an example.
  • division into the modules or units is merely logical function division, and there may be another division manner in an actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the term “if” may be interpreted as “when” or “once” or “in response to determining” or “in response to detecting”.
  • the phrase “if it is determined that” or “if (a described condition or event) is detected” may be interpreted as a meaning of "once it is determined that” or “in response to determining” or “once (a described condition or event) is detected” or “in response to detecting (a described condition or event)” depending on the context.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
EP22827120.1A 2021-06-25 2022-04-06 Procédé d'obtention de caractéristique d'image et dispositif électronique Pending EP4343520A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110713551.2A CN115525183A (zh) 2021-06-25 2021-06-25 获取图像特征的方法及电子设备
PCT/CN2022/085325 WO2022267617A1 (fr) 2021-06-25 2022-04-06 Procédé d'obtention de caractéristique d'image et dispositif électronique

Publications (1)

Publication Number Publication Date
EP4343520A1 true EP4343520A1 (fr) 2024-03-27

Family

ID=84545203

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22827120.1A Pending EP4343520A1 (fr) 2021-06-25 2022-04-06 Procédé d'obtention de caractéristique d'image et dispositif électronique

Country Status (3)

Country Link
EP (1) EP4343520A1 (fr)
CN (1) CN115525183A (fr)
WO (1) WO2022267617A1 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500066B (zh) * 2013-09-30 2019-12-24 北京奇虎科技有限公司 一种适用于触屏设备的截图装置和方法
US10365879B2 (en) * 2014-11-05 2019-07-30 Lg Electronics Inc. Image output device, mobile terminal, and method for controlling a plurality of image output devices
CN106033305A (zh) * 2015-03-20 2016-10-19 广州金山移动科技有限公司 一种屏幕取色方法及装置
CN105242920B (zh) * 2015-09-21 2019-03-29 联想(北京)有限公司 一种截图系统、截图方法以及电子设备
CN109299310A (zh) * 2018-12-05 2019-02-01 王相军 一种屏幕图像取色和搜索方法及系统
CN111596848A (zh) * 2020-05-09 2020-08-28 远光软件股份有限公司 一种界面取色方法、装置、设备及存储介质

Also Published As

Publication number Publication date
WO2022267617A1 (fr) 2022-12-29
CN115525183A (zh) 2022-12-27

Similar Documents

Publication Publication Date Title
US11922005B2 (en) Screen capture method and related device
US20210294429A1 (en) Apparatus, method and recording medium for controlling user interface using input image
KR102135215B1 (ko) 정보 처리 방법 및 단말
EP4002066A1 (fr) Procédé d'interaction par commande gestuelle dans l'air et dispositif électronique associé
CN110100251B (zh) 用于处理文档的设备、方法和计算机可读存储介质
EP3195601B1 (fr) Procédé de fourniture d'une image visuelle d'un son et dispositif électronique mettant en oeuvre le procédé
US9479693B2 (en) Method and mobile terminal apparatus for displaying specialized visual guides for photography
KR102013331B1 (ko) 듀얼 카메라를 구비하는 휴대 단말기의 이미지 합성 장치 및 방법
WO2021179803A1 (fr) Procédé et appareil de partage de contenu, dispositif électronique et support de stockage
US20230367464A1 (en) Multi-Application Interaction Method
WO2021169466A1 (fr) Procédé de collecte d'informations, dispositif électronique et support de stockage lisible par ordinateur
CN107079084A (zh) 即时预览控制装置、即时预览控制方法、即时预览系统及程序
KR101631966B1 (ko) 이동 단말기 및 이의 제어방법
US9195310B2 (en) Camera cursor system
EP4343520A1 (fr) Procédé d'obtention de caractéristique d'image et dispositif électronique
KR102076629B1 (ko) 휴대 장치에 의해 촬영된 이미지들을 편집하는 방법 및 이를 위한 휴대 장치
WO2022042285A1 (fr) Procédé d'affichage d'interface de programme d'application et dispositif électronique
CN115185440A (zh) 一种控件显示方法及相关设备
CN116048349B (zh) 一种图片显示方法、装置及终端设备
WO2024125301A1 (fr) Procédé d'affichage et dispositif électronique
KR20200121261A (ko) 입력 영상을 이용한 사용자 인터페이스 제어 방법, 장치 및 기록매체
CN118276747A (zh) 一种图片显示方法、装置及终端设备
CN116469032A (zh) 幻灯片的标题识别方法、装置及存储介质
CN112230906A (zh) 列表控件的创建方法、装置、设备及可读存储介质
CN109992123A (zh) 输入方法、装置和机器可读介质

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231219

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR