WO2022267617A1 - Method for Acquiring Image Features and Electronic Device - Google Patents

Method for Acquiring Image Features and Electronic Device

Info

Publication number
WO2022267617A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
display screen
window
target
Prior art date
Application number
PCT/CN2022/085325
Other languages
English (en)
French (fr)
Inventor
王亚明
王劲飞
王应文
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP22827120.1A (published as EP4343520A1)
Publication of WO2022267617A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04803 Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • the present application relates to the field of terminals, and in particular to a method for acquiring image features and an electronic device.
  • graphic and text editing functions such as handwriting and drawing enable users to handwrite text or draw images on the screen of an electronic device, which is convenient for users to record information or perform artistic creation.
  • when using the image-text editing function, the user may need a brush of a specific color, or need to draw a specific texture; accordingly, the electronic device needs to provide the user with image features such as these colors and textures.
  • an electronic device may provide a user with a preset image feature library, and the image feature library may include various preset image features.
  • when the electronic device receives the user's selection operation on any image feature, it determines that the image feature is the one selected by the user, and can then perform subsequent operations such as graphic editing based on the selected image feature.
  • the image features included in the preset image feature library are usually very limited. Taking color features as an example, electronic devices may only provide users with several commonly used color features, but in fact the variety of possible color features is virtually endless. This way of providing image features in the prior art therefore has relatively large limitations, and it is difficult to meet user needs.
  • the present application provides a method for acquiring image features and an electronic device, which can improve the flexibility and diversity of acquiring image features.
  • the embodiment of the present application provides a method for acquiring image features, including:
  • the first device receives a first acquisition instruction, where the first acquisition instruction is used to instruct the first device to acquire image features;
  • the first device acquires a first feature in response to the first acquisition instruction, where the first feature is a feature of a first image of a target device, and the target device is the first device or a second device associated with the first device;
  • the first image is at least a part of the image currently displayed on the display screen of the target device.
  • the image feature is a visual feature of the image, and the image feature can be used to edit the text or the image.
  • the text or image can be made to have the image feature.
  • the association between the first device and the second device may mean that the first device and the second device are or can be connected through communication.
  • the first device and the second device may be devices currently connected through short-range communication technology.
  • the first device and the second device may be devices corresponding to the same user identifier.
  • the first device may be user A's tablet computer, and the second device may be user A's mobile phone.
  • the first device may acquire the first feature, where the first feature is a feature of the first image of the target device.
  • the target device may be the first device, or may be a second device associated with the first device, and the first image may be at least part of the image currently displayed on the display screen of the target device. The content of this screen comes from a wide range of sources: it may be the interface of one application program on the target device, a superposition of the interfaces of multiple application programs, a frame of a video being played, or an album view listing multiple photos. The first image is therefore not limited by any particular application program or by the first device itself, and the first features that the first image may include are extremely flexible and diverse. This greatly improves the flexibility and diversity of acquiring image features, and can fully meet user needs.
  • the first feature may be a color-type feature or a texture-type feature.
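The color-type extraction can be pictured with a minimal sketch. The most-frequent-color heuristic below is only an illustrative assumption; the disclosure does not fix a particular extraction algorithm:

```python
from collections import Counter

def extract_dominant_color(pixels):
    """Return the most frequent (R, G, B) tuple in a list of pixels.

    A minimal stand-in for color-type feature extraction; a real
    implementation might quantize or cluster colors instead.
    """
    if not pixels:
        raise ValueError("empty image region")
    color, _count = Counter(pixels).most_common(1)[0]
    return color

# Example: a 2x2 region where red dominates
region = [(255, 0, 0), (255, 0, 0), (0, 255, 0), (255, 0, 0)]
print(extract_dominant_color(region))  # -> (255, 0, 0)
```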
  • the target device is the first device, and the first device acquires the first feature in response to the first acquisition instruction, including:
  • the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation;
  • the first device extracts the first feature from the first image.
  • the first image is at least part of the image currently displayed on the display screen of the first device, and this screen is not restricted to any particular application program. Correspondingly, the first image is not limited by any application program in the first device, which makes it possible to obtain the first feature from sources other than the preset image feature library, such as areas outside the interface of the graphic editing application program. This improves the flexibility and variety of image feature acquisition, and can fully meet user needs.
  • the operation is simpler.
  • the method further includes:
  • the first device creates a first window based on the first acquisition instruction, where the size of the first window is the same as the size of the display screen of the first device, and the first window is a transparent window located on top of the other windows displayed on the display screen;
  • the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, including:
  • when the first device receives the first screenshot operation through the first window, it acquires the first image from the screen currently displayed on the display screen of the first device;
  • the method further includes:
  • the first device closes the first window.
  • each window may belong to a different application program.
  • the left side of the display screen may be a drawing-program window and the right side may be a photo-album window. To prevent the first device from confusing the operation of obtaining the first image with other operations (such as operations on the photo album), and to improve the reliability of obtaining the first image, the first device may create the first window.
  • the above-mentioned windows, such as the second window and the third window, together constitute the screen currently displayed on the display screen of the first device.
  • the transparency of the first window may be preset in the first device by a developer in advance, or may be set by the user before the first window is created.
  • the transparency of the first window may be 100%.
  • the transparency of the first window may also be other values, and this embodiment of the present application does not specifically limit the transparency of the first window.
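The window lifecycle described above (create a full-screen transparent window, receive the screenshot operation through it, then close it) can be sketched as follows. The `Window` and `WindowManager` classes are hypothetical stand-ins for the device's window management service, not an actual API:

```python
class Window:
    def __init__(self, size, transparency=0.0, topmost=False):
        self.size = size
        self.transparency = transparency  # 1.0 means fully transparent
        self.topmost = topmost

class WindowManager:
    """Toy window-management service for the capture-window lifecycle."""
    def __init__(self, screen_size):
        self.screen_size = screen_size
        self.windows = []

    def create_capture_window(self, transparency=1.0):
        # Same size as the display and on top of all other windows, so the
        # screenshot gesture cannot be mistaken for input to another app.
        w = Window(self.screen_size, transparency, topmost=True)
        self.windows.append(w)
        return w

    def close(self, window):
        self.windows.remove(window)

wm = WindowManager((1920, 1080))
capture = wm.create_capture_window()
# ... receive the first screenshot operation through `capture` ...
wm.close(capture)
```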
  • the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, including:
  • the first device determines a first enclosed area on the display screen of the first device based on the first screenshot operation
  • the first device acquires the first image based on the first enclosed area.
  • the first device determines the first enclosed area on the display screen of the first device based on the first screenshot operation, including:
  • the first device determines a first position based on the first screenshot operation, and determines the area within a first border at the first position as the first closed area, where the first border is a preset border; or,
  • the first screenshot operation is a sliding operation, and the first device determines a closed area formed by a sliding track of the sliding operation as the first closed area.
  • the user can flexibly and accurately obtain the first image of any size and shape by sliding on the display screen.
  • the first enclosed area may be the largest enclosed area or the smallest enclosed area formed by the sliding track.
  • the two ends of the sliding track may be connected to obtain a closed area.
  • the first frame (including size and shape) can be determined by setting in advance.
  • the first device may provide a plurality of different frames to the user in advance, and when a user's selection operation is received based on any frame, the frame is determined as the first frame.
  • the embodiment of the present application does not specifically limit the size, shape, and setting manner of the first frame.
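The sliding-track variant of determining the first closed area can be illustrated with a small sketch: connect the track's endpoints to close it, then take a bounding box of the resulting polygon. This is one possible reading for illustration, not the only construction the disclosure allows:

```python
def close_track(track):
    """Close a sliding track by joining its last point back to its first."""
    if len(track) < 3:
        raise ValueError("need at least 3 points to form a closed area")
    return track if track[0] == track[-1] else track + [track[0]]

def bounding_box(polygon):
    """Axis-aligned bounds (min_x, min_y, max_x, max_y) of the closed area."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return min(xs), min(ys), max(xs), max(ys)

track = [(10, 10), (120, 15), (110, 90), (20, 80)]  # a rough user-drawn loop
poly = close_track(track)
print(bounding_box(poly))  # -> (10, 10, 120, 90)
```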
  • the acquiring the first image by the first device based on the first enclosed area includes:
  • the first device captures the first image within the first closed area from the frame currently displayed on the display screen of the first device; or,
  • the first device captures the frame currently displayed on the display screen of the first device as a second image, and crops the second image based on the first closed area to obtain the first image.
  • the first image determined according to the first closed area contains less image data than the second image, so less data needs to be processed when subsequently extracting the first feature, which improves both the efficiency and the accuracy of acquiring the first feature.
  • the second image may also be acquired for subsequent acquisition of the first feature.
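The second strategy (capture the whole frame as a second image, then crop by the closed area) might look like the sketch below, which keeps only the pixels inside the closed area using a standard ray-casting point-in-polygon test. The pixel-dictionary image representation is purely illustrative:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: does (x, y) fall inside the closed polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def crop_second_image(image, poly):
    """Keep only pixels of the full screenshot that lie in the closed area."""
    return {(x, y): c for (x, y), c in image.items()
            if point_in_polygon(x, y, poly)}

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
screenshot = {(2, 2): "red", (10, 10): "blue"}
print(crop_second_image(screenshot, square))  # -> {(2, 2): 'red'}
```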
  • the target device is the second device
  • the first device acquires the first feature in response to the first acquisition instruction, including:
  • the first device sends a first acquisition request to the second device, where the first acquisition request corresponds to the first acquisition instruction and is used to request image features from the second device;
  • the first device receives the first characteristic fed back by the second device.
  • the first image is at least a part of the image currently displayed on the display screen of the second device, and this screen is not restricted to any particular application program. Correspondingly, the first image is not limited by the first device itself, so the first device can obtain the first feature from a second device other than the first device, which further improves the flexibility and diversity of acquired image features and can fully meet user needs.
  • a user can apply the color or texture of a photo in the camera roll of the mobile phone to the drawing program of the tablet computer.
  • the first device may communicate with the second device through the distributed data interaction channel.
  • the first acquisition request may also carry the target feature type.
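The request/response exchange between the first device and the second device could be sketched as below. The message fields and class names are assumptions, since the disclosure does not specify a wire format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AcquisitionRequest:
    request_type: str                    # "feature": peer extracts and returns a feature
    feature_type: Optional[str] = None   # optional target feature type, e.g. "color"

class SecondDevice:
    """Toy peer that answers acquisition requests over an assumed channel."""
    def __init__(self, current_feature):
        self.current_feature = current_feature  # feature of its displayed frame

    def handle(self, req):
        if req.request_type == "feature":
            # the second device extracts the feature itself and feeds it back
            return self.current_feature
        raise ValueError(f"unknown request type: {req.request_type}")

peer = SecondDevice(("color", (30, 144, 255)))
print(peer.handle(AcquisitionRequest("feature", "color")))  # -> ('color', (30, 144, 255))
```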
  • the target device is the second device
  • the first device acquires the first feature in response to the first acquisition instruction, including:
  • the first device sends a second acquisition request to the second device, the second acquisition request corresponds to the first acquisition instruction, and the second acquisition request is used to request to acquire an image from the second device;
  • the first device receives the first image fed back by the second device
  • the first device extracts the first feature from the first image.
  • before the first device acquires the first feature in response to the first acquisition instruction, the method further includes:
  • the first device receives a first setting instruction, where the first setting instruction is used to indicate that the target device is the first device; or,
  • the first device receives a second setting instruction, where the second setting instruction is used to indicate that the target device is the second device.
  • the first device may also determine that the target device is the first device or the second device in other ways, or the first device may be configured to obtain image features only from the first device or only from the second device, so that there is no need to determine whether the target device is the first device or the second device.
  • before the first device acquires the first feature in response to the first acquisition instruction, the method further includes:
  • the first device receives a third setting instruction, and the third setting instruction is used to indicate a target feature type for acquiring image features;
  • the first device acquires the first feature, including:
  • the first device acquires the first feature based on the target feature type.
  • the target feature type includes color type or texture type.
  • the first device can accurately extract the first feature of the target feature type from the first image.
  • the first device may process the first image based on at least one feature type, so as to obtain at least one type of first feature.
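Dispatching extraction on the target feature type, and processing one image for several types at once, can be sketched as follows. The concrete per-type heuristics are placeholders, not the patented algorithms:

```python
from collections import Counter

def extract_feature(pixels, feature_type):
    """Dispatch extraction on the target feature type set by the user."""
    if feature_type == "color":
        # dominant color as a stand-in for color-type extraction
        return Counter(pixels).most_common(1)[0][0]
    if feature_type == "texture":
        # placeholder: hand the region itself over as a texture swatch
        return tuple(pixels)
    raise ValueError(f"unsupported feature type: {feature_type}")

def extract_all(pixels, feature_types=("color", "texture")):
    # a device may process the first image for several types at once
    return {t: extract_feature(pixels, t) for t in feature_types}

region = [(9, 9, 9), (9, 9, 9), (0, 0, 0)]
print(extract_all(region)["color"])  # -> (9, 9, 9)
```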
  • the method also includes:
  • the first device receives a third acquisition request from an associated third device, and the third acquisition request is used to request to acquire image features from the first device;
  • the first device acquires a third image, and the third image is at least part of an image currently displayed on a display screen of the first device;
  • said first device extracts a second feature from said third image
  • the first device feeds back the second feature to the third device.
  • the first device may also act as a provider of the image feature, thereby providing the second feature to the associated third device. And in some embodiments, the first device may send the third image to the third device, and correspondingly, the third device processes the third image to obtain the second feature.
  • the first device performs an image-text editing operation based on the first feature.
  • the first device may perform graphic editing operations based on the first feature in the graphic editing program, so as to apply the first feature to new text or images, so that the operated text or image has the first feature.
  • the first device may add the obtained first feature to an image feature library such as the built-in palette or the built-in texture image library, so that next time the user can obtain the first feature directly from that image feature library.
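Adding an acquired feature to the built-in palette so the user can reuse it later might look like this sketch; the duplicate check is an assumed detail rather than a stated requirement:

```python
class PaletteLibrary:
    """Built-in palette that also stores features acquired at run time,
    so the user can reuse them without capturing again."""
    def __init__(self, preset_colors):
        self.colors = list(preset_colors)

    def add(self, color):
        if color not in self.colors:  # avoid duplicate swatches
            self.colors.append(color)
        return self.colors.index(color)

palette = PaletteLibrary([(0, 0, 0), (255, 255, 255)])
idx = palette.add((30, 144, 255))  # a newly acquired first feature
print(idx, len(palette.colors))    # -> 2 3
```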
  • the first device or the second device may not obtain the first feature from the first image, but directly copy the first image to a graphic editing program.
  • the embodiment of the present application provides a method for acquiring image features, including:
  • the second device receives a first acquisition request sent by the first device, where the first acquisition request is used to request to acquire image features from the second device;
  • the second device acquires a first image, and the first image is at least part of an image currently displayed on a display screen of the second device;
  • said second device extracts said first feature from said first image
  • the second device feeds back the first feature to the first device.
  • the acquiring the first image by the second device includes:
  • the second device acquires the first image from the screen currently displayed on the display screen of the second device.
  • the method further includes:
  • the second device creates a first window based on the first acquisition request, where the size of the first window is the same as the size of the display screen of the second device, and the first window is a transparent window located on top of the other windows displayed on the display screen;
  • the second device acquires the first image from the screen currently displayed on the display screen of the second device based on the first screenshot operation, including:
  • when the second device receives the first screenshot operation through the first window, it acquires the first image from the screen currently displayed on the display screen of the second device;
  • the method further includes:
  • the second device closes the first window.
  • the second device acquires the first image from the screen currently displayed on the display screen of the second device based on the first screenshot operation, including:
  • the second device determines a first enclosed area on the display screen of the second device based on the first screenshot operation
  • the second device acquires the first image based on the first enclosed area.
  • the second device determines the first enclosed area on the display screen of the second device based on the first screenshot operation, including:
  • the second device determines a first position based on the first screenshot operation, and determines the area within a first frame at the first position as the first closed area, where the first frame is a preset border; or,
  • the first screenshot operation is a sliding operation
  • the second device determines a closed area formed by a sliding track of the sliding operation as the first closed area.
  • the acquiring the first image by the second device based on the first enclosed area includes:
  • the second device captures the first image within the first enclosed area from the frame currently displayed on the display screen of the second device; or,
  • the second device captures the frame currently displayed on the display screen of the second device as a second image, and crops the second image based on the first closed area to obtain the first image.
  • the second device may also acquire and feed back the first image to the first device when receiving the second acquisition request sent by the first device, and the first device extracts the first feature from the first image.
  • the embodiment of the present application provides a device for acquiring image features, the device can be set in an electronic device, and the device can be used to perform the method described in any one of the first aspect and/or any one of the second aspect.
  • the device may include a hand-painted brush engine module.
  • the hand-painted brush engine module can be used for interaction between the electronic device and the user, such as triggering the electronic device to acquire image features according to the method provided by the embodiment of the present application.
  • the device may include a window management service module.
  • the window management service module can be used to manage the life cycle of each window in the electronic device, detect touch events for each window, and so on.
  • the electronic device can create and close the first window through the window management service module.
  • the device may include a layer composition module.
  • the layer compositing module can be used to synthesize the pictures obtained from multiple windows into one image, and thus can be used to obtain the first image or the second image.
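Layer composition as described (flattening the pictures of multiple windows into one screen image) can be sketched as a painter's-algorithm loop. Real compositors also handle alpha blending, which is omitted in this illustrative version:

```python
def composite_layers(layers, size):
    """Flatten per-window layers (ordered bottom to top) into one image.

    Each layer maps (x, y) -> pixel; later (higher) layers draw over
    earlier ones, and out-of-screen pixels are discarded.
    """
    width, height = size
    screen = {}
    for layer in layers:
        for (x, y), pixel in layer.items():
            if 0 <= x < width and 0 <= y < height:
                screen[(x, y)] = pixel
    return screen

# A 2x1 screen: a drawing window paints the left pixel, an album the right.
drawing = {(0, 0): "brush"}
album = {(1, 0): "photo"}
print(composite_layers([drawing, album], (2, 1)))
```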
  • the device may include a distributed task scheduling module.
  • the distributed task scheduling module can be used for electronic devices to call services from other devices through distributed data interaction channels.
  • an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is used to store a computer program, and the processor, when calling the computer program, is used to execute the method described in any one of the above-mentioned first aspect and/or any one of the second aspect.
  • an embodiment of the present application provides a chip system, where the chip system includes a processor coupled to a memory, and the processor executes a computer program stored in the memory to implement the method described in any one of the implementations of the above-mentioned first aspect.
  • the chip system may be a single chip, or a chip module composed of multiple chips.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method described in any one of the above-mentioned first aspect and/or any one of the second aspect is realized.
  • the embodiment of the present application provides a computer program product which, when run on an electronic device, enables the electronic device to execute the method described in any one of the above-mentioned first aspect and/or any one of the second aspect.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a software structural block diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 3 is a flow chart of a method for editing images and texts provided in the embodiment of the present application.
  • FIG. 4 is a block diagram of a system for acquiring image features provided by an embodiment of the present application.
  • FIG. 5 is a flow chart of a method for acquiring image features provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a display interface of an electronic device provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a display scene provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a first enclosed area provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another first enclosed area provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the method for acquiring image features provided by the embodiment of the present application can be applied to mobile phones, tablet computers, wearable devices, vehicle-mounted devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, personal digital assistants (PDAs), and other electronic devices; the embodiments of the present application do not impose any restrictions on the specific type of the electronic device.
  • FIG. 1 is a schematic structural diagram of an example of an electronic device 100 provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, a memory 120, a communication module 130, a display screen 140, and the like.
  • the processor 110 may include one or more processing units, and the memory 120 is used for storing program codes and data.
  • the processor 110 may execute computer-executed instructions stored in the memory 120 for controlling and managing the actions of the electronic device 100 .
  • the communication module 130 may be used for communication between various internal modules of the electronic device 100, or for communication between the electronic device 100 and other external electronic devices, and the like. Exemplarily, if the electronic device 100 communicates with other electronic devices through a wired connection, the communication module 130 may include an interface such as a USB interface or a USB Type-C interface.
  • the USB interface can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the communication module 130 may include an audio device, a radio frequency circuit, a Bluetooth chip, a wireless fidelity (Wi-Fi) chip, a near-field communication (near-field communication, NFC) module, etc.
  • the display screen 140 may display images or videos in the human-computer interaction interface.
  • the electronic device 100 may further include a pressure sensor 150 for sensing a pressure signal, and may convert the pressure signal into an electrical signal.
  • the pressure sensor 150 can be disposed on the display screen.
  • the pressure sensor 150 may be, for example, a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor.
  • a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of pressure according to the change in capacitance. When a touch operation acts on the display screen, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 150 .
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 150 .
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
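The threshold-based dispatch just described can be sketched as below. This is a minimal illustration, not the patent's implementation; the threshold value and instruction names are assumptions for the example.

```python
# Sketch: the same touch position triggers different operation instructions
# depending on touch intensity. Threshold and instruction names are assumed.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure value

def dispatch_touch(target, pressure):
    """Map a touch on a UI target to an operation instruction."""
    if target == "short_message_icon":
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view_short_messages"   # light touch: view
        return "create_new_short_message"  # firm touch: create
    return "no_op"
```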
  • the electronic device 100 may also include a peripheral device 160, such as a mouse, a keyboard, a speaker, a microphone, a stylus, and the like.
  • the embodiment of the present application does not specifically limit the structure of the electronic device 100 .
  • the electronic device 100 may also include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. Please refer to FIG. 2 .
  • FIG. 2 is a software structural block diagram of the electronic device 100 according to the embodiment of the present application.
  • the electronic device 100 may include an application layer 210 and a system layer 220 .
  • Application layer 210 may include a series of application packages.
  • the application package may include graphic editing applications such as document editing applications and drawing applications.
  • Graphic editing applications can be used to edit text or images, such as generating text, modifying text styles, or drawing images.
  • the application layer 210 may include a built-in palette 211 and a built-in texture image library 212 .
  • the built-in color palette 211 may include multiple preset color features.
  • the built-in texture image library 212 may include a plurality of texture features that are preset or uploaded by the user in advance.
  • the application layer 210 may include a hand-painted brush engine module 213 .
  • the hand-painted brush engine module 213 may be used for interaction between the electronic device 100 and the user, such as triggering the electronic device 100 to acquire image features according to the method provided by the embodiment of the present application.
  • the system layer 220 may include a window management service module 221 and a layer composition module 222 .
  • the window management service module 221 can be used to manage the life cycle of each window in the electronic device 100 and detect touch events on each window and so on.
  • the touch event may include touch coordinates, pressure values, and the like.
  • the layer compositing module 222 can be used for compositing the frames obtained from multiple windows into one image.
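Compositing frames from multiple windows into one image can be sketched as a painter's algorithm: windows are drawn bottom-to-top so that upper layers overwrite lower ones. Representing windows as solid-colored rectangles is a simplifying assumption for illustration.

```python
def composite(screen_w, screen_h, windows):
    """Composite window frames into one screen image.

    windows: list of dicts {"x", "y", "w", "h", "color"}, ordered
    bottom-to-top; later entries are drawn over earlier ones.
    """
    screen = [[None] * screen_w for _ in range(screen_h)]
    for win in windows:  # painter's algorithm: upper layers overwrite lower
        for row in range(max(win["y"], 0), min(win["y"] + win["h"], screen_h)):
            for col in range(max(win["x"], 0), min(win["x"] + win["w"], screen_w)):
                screen[row][col] = win["color"]
    return screen
```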
  • the system layer 220 may also include a distributed task scheduling module 223 .
  • the distributed task scheduling module 223 can be used for the electronic device 100 to call services from other devices through the distributed data interaction channel.
  • system layer 220 may also include an image rendering module.
  • the image drawing module can be used to draw images on the display screen 140 .
  • the graphic editing function is an important function of an electronic device. Users can edit text or images on the electronic device through the graphic editing function. In the process of graphic and text editing, users usually need to personalize the text or image, such as setting the text or image to a specific color, or drawing a specific texture on a certain area of the text or image.
  • FIG. 3 is a flow chart of a method for editing images and texts provided by the embodiment of the present application.
  • the user touches the display screen 140 of the electronic device 100 .
  • the user can interact with the electronic device 100 by touching the display screen 140 of the electronic device 100 with a body or a stylus, such as selecting text or image areas that need to be colored or textured.
  • the electronic device 100 processes a touch event through the system layer 220 to obtain a touch event object.
  • the electronic device 100 can process the touch event through the system layer 220 , encapsulate the coordinates and pressure values of the touch event into a touch event object, and provide the touch event object to the application layer 210 .
  • the electronic device 100 performs corresponding logic processing based on the touch event object through the application layer 210.
  • the graphics and text editing program (such as a drawing program or a document program) of the application layer 210 in the electronic device 100 can, after obtaining the touch event object, perform logic processing inside the application program, such as determining that the user has opened the built-in palette 211, determining the color selected by the user in the built-in palette 211, determining that the user has opened the built-in texture image library 212, or determining the texture feature selected by the user in the built-in texture image library 212.
  • the electronic device 100 performs graphic and text editing operations through the system layer 220 .
  • the electronic device 100 can perform image-text editing operations through the system layer 220 , and display image-text editing results on the display screen 140 .
  • as for the editing operation on an image: if the image is to be colored, the image can be dyed based on the color feature determined from the built-in palette 211; if a texture is to be applied, the corresponding texture can be drawn based on the texture feature determined from the built-in texture image library 212.
  • in the process of implementing the above image-text editing method, the electronic device can only provide the user with color features through the built-in palette and with texture features through the built-in texture image library.
  • since image feature libraries such as the built-in palette and the built-in texture image library are usually set in advance by developers of graphics and text editing programs, the image features included therein are quite limited and can hardly meet user needs.
  • embodiments of the present application provide a system and method for acquiring image features.
  • FIG. 4 is a block diagram of a system for acquiring image features provided by an embodiment of the present application.
  • the system may include a first device 410 , and may further include a second device 420 associated with the first device 410 and a distributed data interaction channel 430 for data interaction between the first device 410 and the second device 420 .
  • the association between the first device 410 and the second device 420 may mean that the first device 410 and the second device 420 are connected, or can be connected, through a communication connection.
  • the first device 410 and the second device 420 may be devices currently connected through short-range communication technology.
  • the first device 410 and the second device 420 may be devices corresponding to the same user identifier.
  • the first device 410 may be a tablet computer of user A
  • the second device 420 may be a mobile phone of user A.
  • the first device 410 may include an application layer 411 and a system layer 412 .
  • the application layer 411 may include a freehand brush engine module 413 .
  • the system layer 412 may include a window management service module 414 , a layer synthesis module 415 and a distributed task scheduling module 416 .
  • the second device 420 may include an application layer 421 and a system layer 422 .
  • the application layer 421 may include a freehand brush engine module 423 .
  • the system layer 422 may include a window management service module 424, a layer compositing module 425 and a distributed task scheduling module 426.
  • the above-mentioned hand-painted brush engine module 413, window management service module 414, layer composition module 415 and distributed task scheduling module 416 may respectively be similar or identical to the hand-painted brush engine module 213, window management service module 221, layer composition module 222 and distributed task scheduling module 223 in the electronic device 100 in FIG. 2; likewise, the hand-painted brush engine module 423, window management service module 424, layer composition module 425 and distributed task scheduling module 426 may be similar or identical to the hand-painted brush engine module 213, window management service module 221, layer composition module 222 and distributed task scheduling module 223 in the electronic device 100 in FIG. 2.
  • the hand-painted brush engine module in the first device 410 and/or the second device 420 may be omitted; moreover, in the case where the first device 410 does not need to acquire image features from the second device 420, the distributed task scheduling module 416 in the first device 410 may also be omitted.
  • the application layer of the first device 410 and/or the second device 420 may also include at least one of a built-in color palette and a built-in texture image library.
  • the first device 410 can acquire a first feature, where the first feature is a feature of a first image of the target device; the target device may be the first device 410 or the second device 420, and the first image may be at least a part of the picture currently displayed on the target device. Since the content of the picture displayed on the target device comes from a wide range of sources, it may be the interface of one application program on the target device, a superimposition of the interfaces of multiple application programs on the target device, a frame in a video, or a list of multiple photos in an album.
  • the first image, as a part of the displayed picture, is not limited by any application program in the first device 410 or by the first device 410 itself, and is not tied to the built-in palette or built-in texture library of the graphics and text editing program.
  • the first features that may be included in the first image are extremely flexible and diverse, thus greatly improving the flexibility and diversity of acquiring image features.
  • the user may open a favorite photo on the display screen of the first device 410, so that the picture currently displayed on the display screen includes the photo, and then obtain the first image and extract the first feature from it; that is, image features can be quickly obtained from the user's favorite images, which can fully meet the user's needs.
  • FIG. 5 is a flowchart of a method for acquiring image features provided by an embodiment of the present application. It should be noted that the method is not limited to the specific order shown in FIG. 5 and described below; in other embodiments, the order of some steps in the method can be exchanged according to actual needs, and some steps can also be omitted or deleted. This method can be used in the first device, or in the interaction between the first device and the second device, as shown in FIG. 4, and includes the following steps:
  • the first device receives a first acquisition instruction.
  • the first obtaining instruction is used to instruct the first device to obtain image features.
  • the image feature may be a visual feature of the image, and the image feature may be used to edit the text or the image.
  • the text or image can be made to have the image feature.
  • the feature types of image features may include color types and texture types.
  • the feature types of image features may also include other feature types, such as at least one of shape type and spatial relationship type.
  • the first device may provide the user with a control for triggering the acquisition of image features through a human-computer interaction interface, and receive a first acquisition instruction submitted by the user based on the control.
  • the display screen of the first device is a touch screen, and the user can interact with the first device by clicking or sliding on the screen with a finger or a stylus.
  • the lower left corner of the first device includes an "acquire image features" button, and when the first device receives a click operation based on the button, it can be determined that the first acquisition instruction is received.
  • the first device may receive a third setting instruction submitted by the user, and the third setting instruction is used to indicate the target feature type.
  • the target feature type may include a color type or a texture type. In some other embodiments, the target feature type may also be carried in the first acquisition instruction.
  • when the first device receives a click operation based on "acquire image features" in FIG. 6, it can continue to provide the user with a secondary menu for determining the feature type; as shown in FIG. 7, the secondary menu includes a variety of selectable feature types, and when the first device receives the user's click operation based on any feature type, it determines that this feature type is the target feature type selected by the user.
  • in order to enable the user to obtain image features from other electronic devices and apply them on the first device, further improving the range and flexibility of image feature acquisition, the first device can receive a first setting instruction or a second setting instruction submitted by the user, where the first setting instruction may carry the device identifier of the first device, indicating that the target device for acquiring image features is the first device, and the second setting instruction may carry the device identifier of the second device, indicating that the target device for acquiring image features is the second device.
  • the device identifier of the first device or the device identifier of the second device may also be carried in the first acquisition instruction.
  • when the first device receives the user's click operation based on "acquire image features" as shown in FIG. 6, or determines the target feature type selected by the user based on the secondary menu as shown in FIG. 7, it may continue to display the device selection interface shown in FIG. 8; the device selection interface includes at least one device identifier, and when a user's click operation is received based on any device identifier, it can be determined that the electronic device corresponding to that device identifier is the target device.
  • when the first device receives a click operation based on the device identifier of the first device, it can be determined that the first setting instruction is received and the first device is the target device; when the first device receives a click operation based on the device identifier of the second device, it can be determined that the second setting instruction is received and the second device is the target device.
  • the second device may be a device associated with the first device.
  • the first device may receive other information for indicating the manner of acquiring image features; such indication information may be conveyed by separate setting instructions, or may all be carried in the first acquisition instruction.
  • the embodiment of the present application does not specifically limit the manner in which the first device receives the indication information used to indicate the manner of acquiring the image feature.
  • the first device judges whether to acquire image features across devices. If yes, execute S506, otherwise execute S503.
  • in order to obtain image features from the local end or from another device using the corresponding acquisition method, the first device can determine whether to acquire image features across devices.
  • if the device identifier submitted by the user is the device identifier of the first device, the first device may determine that image features do not need to be acquired across devices; if the device identifier submitted by the user is not the device identifier of the first device, the first device determines that image features need to be extracted across devices from the second device corresponding to the received device identifier.
  • alternatively, the first device may determine whether a device identifier is carried in the first setting instruction, the second setting instruction or the first acquisition instruction. If the first setting instruction or the first acquisition instruction does not carry any device identifier, or the carried device identifier is the device identifier of the first device, it may be determined that there is no need to extract image features across devices. If the second setting instruction or the first acquisition instruction carries a device identifier, and that device identifier is not the device identifier of the first device, image features need to be extracted across devices from the second device corresponding to the received device identifier.
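The branching at S502 can be sketched as follows; the instruction representation (a dict with an optional device identifier) is an assumption for illustration, not the patent's data format.

```python
def needs_cross_device(local_device_id, instruction):
    """Decide whether image features must be acquired across devices.

    Returns (cross_device, target_device_id). If the instruction carries
    no device identifier, or carries the local device's identifier, the
    features are acquired locally (S503); otherwise they are requested
    from the identified second device (S506).
    """
    target = instruction.get("device_id")
    if target is None or target == local_device_id:
        return False, local_device_id
    return True, target
```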
  • the first device may also be configured to obtain image features only from its own end or only from the second device, so S502 may not be executed, that is, S502 is an optional step.
  • the first device creates a first window.
  • each window on the display screen may belong to a different application program; for example, the left side may be the window of the drawing program and the right side may be the window of the photo album. Therefore, in order to prevent the first device from confusing the operation of obtaining the first image with other operations (such as operations intended for the photo album), and to improve the reliability of obtaining the first image, the first device can create a first window through the aforementioned window management service module.
  • the size of the first window may be the same as the size of the display screen of the first device, and the first window may be a transparent window located above the other windows displayed on the display screen; that is, the first window is a global transparent window located above all application programs of the first device.
  • the transparency of the first window may be preset in advance by relevant technical personnel, or may be obtained from the user's submission before the first window is created.
  • the transparency of the first window may be 100%.
  • the transparency of the first window may also be other values, and this embodiment of the present application does not specifically limit the transparency of the first window.
  • a schematic diagram of a display scene may be as shown in FIG. 9 .
  • the scene includes a first window 901 on the top layer, the first window is a global transparent window with a transparency of 100%, and the lower layer of the first window is the original display interface of the first device, including the second window 902 and the third window 903, wherein the second window 902 is the window of the drawing program as shown in FIGS. 6-8 , and the third window is the window of the photo album as shown in FIGS. 6-8 .
  • the first device may also acquire the first image in other ways, therefore S503 may not be executed, that is, S503 is an optional step.
  • the first device acquires a first image.
  • the first image may be at least a part of the image currently displayed on the display screen of the first device.
  • the first device may acquire the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation. In some embodiments, when the first device creates the first window, the first device may receive the first screenshot operation based on the first window.
  • the user can set the area range of the first image to be acquired through the first screenshot operation.
  • the first device may determine the first closed area on the display screen of the first device based on the first screenshot operation, and the image in the first closed area is the first image that the user needs to acquire.
  • the first screenshot operation may be used to directly determine the first closed area.
  • the first screenshot operation may include a sliding operation.
  • the first device may determine the closed area formed by the sliding track of the sliding operation as the first closed area.
  • the enclosed area may be the largest enclosed area or the smallest enclosed area formed by the sliding trajectory. That is, the user can flexibly and accurately acquire the first image of any size and any shape by sliding on the display screen.
  • as shown in FIG. 10, the photo on the upper right of the display screen of the first device includes river banks on both sides and a person jumping above them; the user draws an irregular first closed area 1001, and the first closed area 1001 includes the river bank on the right side.
  • if the sliding track does not close on itself, the ends of the sliding track may be connected to obtain a closed area.
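Determining the first closed area from a sliding track can be sketched as below: the track is closed by connecting its ends, and a standard ray-casting test decides which screen points fall inside the resulting polygon. This is a generic geometric illustration, not the patent's algorithm.

```python
def close_track(track):
    """Connect the ends of a sliding track to form a closed polygon."""
    return track if track[0] == track[-1] else track + [track[0]]

def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt inside the closed polygon?"""
    x, y = pt
    inside = False
    for i in range(len(polygon) - 1):
        (x1, y1), (x2, y2) = polygon[i], polygon[i + 1]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```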
  • the first screenshot operation may be used to determine the first position of the first closed area on the display screen, and the preset first border may be used to determine the size and shape of the first closed area.
  • the first device may determine the first position based on the first screenshot operation, and determine the area within the first border at the first position as the first closed area. Since the user does not need to draw the first closed area, the difficulty of acquiring the first image can be reduced.
  • the first frame is a circular frame with a diameter of 3 cm.
  • the photo on the lower right of the display screen of the first device includes a half-body photo of a person.
  • the clicked position or the end position of the sliding track is the first position.
  • a circular frame with a diameter of 3 cm is generated at the first position, and the area within the circular frame is the first closed area 1001, which includes the person's head portrait.
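Placing the preset first border at the first position can be sketched as a membership test over screen coordinates. Pixel units stand in here for the 3 cm physical size, which in practice would need the screen's DPI to convert; the representation is an assumption for illustration.

```python
import math

def circular_first_border(first_position, diameter):
    """Return a test for whether a point lies in the first closed area:
    a circular frame of the given diameter centered at the first
    position (the click point or the end of the sliding track)."""
    cx, cy = first_position
    radius = diameter / 2
    def contains(point):
        return math.hypot(point[0] - cx, point[1] - cy) <= radius
    return contains
```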
  • the first frame (including size and shape) may be determined by setting in advance.
  • the first device may provide a plurality of different frames to the user in advance, and when a user's selection operation is received based on any frame, the frame is determined as the first frame.
  • the embodiment of the present application does not specifically limit the size, shape, and setting manner of the first frame.
  • the first screenshot operation may also include other operations, as long as the first closed area can be determined, and the embodiment of the present application does not specifically limit the operation mode of the first screenshot operation .
  • the first device may acquire the first image based on the first closed area.
  • the first device may capture the screen currently displayed on the display screen of the first device as the second image, and crop the second image based on the first closed area to obtain the first image. That is, the first device may first take a screenshot of the entire screen of the display screen of the first device, and then cut out the first image from the second image obtained by the screenshot according to the first enclosed area.
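The screenshot-then-crop approach can be sketched as below: the full-screen second image is cropped to the bounding box of the first closed area, and pixels outside the (possibly irregular) area are blanked. The pixel-grid representation is a simplifying assumption.

```python
def crop_to_region(second_image, region_pixels):
    """Crop the full-screen screenshot (second image) down to the first
    closed area, producing the first image.

    second_image: 2D list indexed as [row][col].
    region_pixels: set of (row, col) coordinates inside the closed area.
    """
    rows = [r for r, _ in region_pixels]
    cols = [c for _, c in region_pixels]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    return [
        [second_image[r][c] if (r, c) in region_pixels else None
         for c in range(c0, c1 + 1)]
        for r in range(r0, r1 + 1)
    ]
```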
  • the first device may capture the first image in the first enclosed area from the screen currently displayed on the display screen of the first device.
  • through the layer composition module, the first device can determine, based on the positional relationship between each window and the first closed area, the picture of at least one window that matches the first closed area, and composite the pictures of the at least one window into the first image according to the upper-lower layer relationship between the at least one window.
  • since the first image determined according to the first closed area may contain less image data than the second image, the subsequent analysis for extracting the first feature involves less data and is more accurate, which can improve the efficiency and accuracy of acquiring the first feature.
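The selection of windows that match the first closed area can be sketched as a rectangle-intersection filter that preserves z-order for the subsequent compositing step. Treating windows as axis-aligned rectangles is an assumption for illustration.

```python
def windows_matching_region(windows, region_bbox):
    """Select the windows whose bounds intersect the first closed area's
    bounding box, preserving bottom-to-top z-order for compositing.

    windows: list of dicts {"x", "y", "w", "h"}, ordered bottom-to-top.
    region_bbox: (x0, y0, x1, y1) of the first closed area.
    """
    rx0, ry0, rx1, ry1 = region_bbox
    matched = []
    for win in windows:
        wx0, wy0 = win["x"], win["y"]
        wx1, wy1 = win["x"] + win["w"], win["y"] + win["h"]
        # standard axis-aligned rectangle overlap test
        if wx0 < rx1 and rx0 < wx1 and wy0 < ry1 and ry0 < wy1:
            matched.append(win)
    return matched
```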
  • the second image may also be acquired for subsequent acquisition of the first feature.
  • the first device may close the first window after acquiring the first image, so that the user can continue to interact with other windows .
  • the first device acquires the first feature. Afterwards, the first device may execute S511.
  • the first device may analyze and process the first image so as to extract the first feature. Because the first image is at least part of the picture currently displayed on the display screen of the first device, and that picture is not restricted to any single application program, the first image is correspondingly not limited by any application program in the first device. This makes it possible to obtain the first feature from sources other than a preset image feature library, such as areas outside the interface of the graphic editing application program, thereby improving the flexibility and variety of image feature acquisition, fully meeting user needs, and making the operation simpler.
  • if the first device has acquired the target feature type specified by the user through the third setting instruction or the first acquisition instruction, the first device can process the first image based on the target feature type, thereby obtaining the first feature of the target feature type.
  • if the target feature type is a color type, the first device may perform type analysis on the color of the first image, and the obtained first feature is a feature of the color type, such as a red-green-blue (RGB) value;
  • if the feature type carried in the first acquisition instruction is a texture type, the first device can perform type analysis on the texture of the first image, and the obtained first feature is a feature of the texture type.
  • the first image may be processed based on at least one feature type, so as to obtain at least one type of first feature.
  • for the color type, the first device can analyze the first image by means of a color histogram, color set, color moments, color coherence vector or color correlogram; for the texture type, the first device can analyze the first image using statistical methods, geometric methods, model-based methods or signal processing methods, or blur, denoise or add salt-and-pepper noise to the first image; for the shape type, the first device can analyze the first image by means of the boundary feature method, Fourier shape descriptor method, geometric parameter method or shape invariant moment method; for the spatial relationship type, the first device can divide the first image into multiple image blocks, then extract features from each image block and build an index.
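For the color type, two simple first features can be sketched as below: a mean RGB value and a coarse per-channel color histogram. These are generic examples of the techniques listed above, not the patent's exact computation.

```python
def mean_rgb(pixels):
    """Average RGB value over the first image's pixels (a color feature)."""
    n = len(pixels)
    return tuple(sum(p[ch] for p in pixels) / n for ch in range(3))

def color_histogram(pixels, bins=4):
    """Coarse per-channel color histogram; channel values in 0..255."""
    hist = [[0] * bins for _ in range(3)]
    step = 256 // bins
    for p in pixels:
        for ch in range(3):
            hist[ch][min(p[ch] // step, bins - 1)] += 1
    return hist
```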
  • the first device may also process the first image in other ways to obtain the first feature.
  • the embodiment of the present application does not specifically limit the way to obtain the first feature from the first image.
  • the first device sends a first acquisition request to the second device.
  • the first device may send a first acquisition request corresponding to the first acquisition instruction to the second device through the distributed data interaction channel, thereby requesting the second device to acquire the image feature.
  • the target feature type may be carried in the first acquisition request.
  • the first device can establish a distributed data interaction channel with the second device, and perform data interaction with the second device through the distributed data interaction channel, including sending a first acquisition request to the second device and subsequently receiving The data fed back by the second device.
  • when the second device receives the first acquisition request sent by the first device, it may display first notification information, where the first notification information is used to notify the user that the first acquisition request of the first device is about to be responded to.
  • when the second device receives the first acquisition request, it may display an interface that includes the first notification information as well as an accept button and a reject button. If the user's click operation is received based on the accept button, the steps described below may proceed; if the user's click operation is received based on the reject button, subsequent operations may be stopped.
  • the first device may also send a second acquisition request to the second device, where the second acquisition request is used to request the second device to acquire an image for acquiring image features.
  • the second device creates the first window.
  • the manner in which the second device creates the first window may be the same as the manner in which the first device creates the first window in S503 , which will not be repeated here.
  • the second device acquires the first image.
  • the manner in which the second device acquires the first image may be the same as the manner in which the first device acquires the first image in S504, which will not be repeated here.
  • the second device acquires the first feature.
  • the manner in which the second device acquires the first feature may be the same as the manner in which the first device acquires the first feature in S505 , which will not be repeated here.
  • the second device sends the first feature to the first device.
  • correspondingly, if the first device receives the first feature fed back by the second device, the first device may execute S511.
  • the second device may send the first characteristic to the first device based on the aforementioned distributed data interaction channel.
  • if what the second device received is the second acquisition request, S509 may not be executed, and the first image is fed back to the first device in S510 instead.
  • the first device may execute the aforementioned S505 when receiving the first image, so as to extract the first feature.
  • through the foregoing S506-S510, the first device can obtain the first feature from the second device. Because the first image is at least part of the picture currently displayed on the display screen of the second device, and that picture is not restricted by any particular application program, the first image is likewise not restricted by the first device itself. The first device can therefore obtain the first feature from a second device other than the first device, which further improves the flexibility and diversity of acquiring image features and can fully meet user needs. For example, a user can apply the color or texture of a photo in the camera roll of the mobile phone to the drawing program of the tablet computer.
  • the first device performs an image-text editing operation based on the first feature.
  • the first device may perform a graphic editing operation based on the first feature in the graphic editing program, so as to apply the first feature to a new text or image, so that the operated object has the first feature.
  • the first device may bind the first feature to the stylus. If the first device detects the drawing operation of the stylus, the image feature of the text or image drawn by the drawing operation is set as the first feature.
  • for example, if the first feature is an RGB value, the first device can bind the RGB value to the stylus; when the user draws with the stylus, the color of the drawn track is the color indicated by the RGB value.
  • alternatively, if the first feature is a texture feature, the first device can bind the texture feature to the stylus; when the user draws with the stylus, the texture feature of the drawn track is the texture feature bound to the stylus.
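The stylus binding described above can be sketched as follows: the acquired first feature is stored on a stylus object, and every subsequent stroke inherits it. The class and attribute names are illustrative assumptions, not taken from the patent.

```python
class Stylus:
    """Minimal sketch of a stylus whose strokes carry a bound image feature."""

    def __init__(self):
        self.bound_feature = None

    def bind_feature(self, feature):
        # e.g. an RGB value such as (18, 52, 86), or a texture identifier
        self.bound_feature = feature

    def draw(self, points):
        """Return a stroke: the drawn track plus the bound feature."""
        return {"track": list(points), "feature": self.bound_feature}


stylus = Stylus()
stylus.bind_feature((18, 52, 86))        # bind the acquired RGB value
stroke = stylus.draw([(0, 0), (10, 5)])
print(stroke["feature"])                 # (18, 52, 86)
```

Rebinding with a newly acquired feature simply overwrites `bound_feature`, so the next stroke takes on the new color or texture.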
  • the first device or the second device may not obtain the first feature from the first image, but directly copy the first image to the graphic editing program.
  • after obtaining the first feature, the first device may also choose not to execute S511 to immediately apply the first feature; that is, S511 is an optional step.
  • the first device may add the obtained first feature to an image feature library such as the built-in palette or the built-in texture image library, so that the user can obtain the first feature directly from such a library next time.
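Adding the acquired feature to a built-in image feature library, so it can be reused later, might look like the following sketch, where a simple de-duplicated list stands in for the built-in palette (the class and method names are assumptions for illustration).

```python
class Palette:
    """Sketch of a built-in palette that collects acquired color features."""

    def __init__(self, presets=()):
        self.colors = list(presets)

    def add(self, color):
        # keep each acquired feature only once, so the user can reselect it later
        if color not in self.colors:
            self.colors.append(color)

    def pick(self, index):
        return self.colors[index]


palette = Palette(presets=[(0, 0, 0), (255, 255, 255)])
palette.add((18, 52, 86))    # first feature obtained from the first image
palette.add((18, 52, 86))    # adding the same feature again has no effect
print(len(palette.colors))   # 3
```

A texture image library could follow the same pattern, storing texture images or identifiers instead of RGB tuples.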
  • in a manner similar to the interaction with the second device, the first device may also act as a provider of image features, providing a second feature to a third device associated with the first device.
  • the first device may receive a third acquisition request from the third device, where the third acquisition request is used to request to acquire image features from the first device.
  • the first device acquires a third image, where the third image may be at least part of the picture currently displayed on the display screen of the first device; the first device extracts the second feature from the third image, and feeds back the second feature to the third device.
  • the first device may receive a fourth acquisition request from the third device, where the fourth acquisition request is used to request to acquire an image for acquiring image features from the first device.
  • the first device acquires the third image, and feeds back the third image to the third device.
  • the third device extracts the second feature from the third image.
  • the first device may acquire the first feature, where the first feature is a feature of the first image of the target device.
  • the target device may be the first device, or may be a second device associated with the first device, and the first image may be at least part of the picture currently displayed on the display screen of the target device. Since the content of this picture comes from a wide range of sources, it may be the interface of one application program on the target device, or it may be a superposition of the interfaces of multiple application programs on the target device.
  • for example, the picture may be a frame of a video being played, or a list of multiple photos in an album. Therefore, the first image is not limited by any particular application program or by the first device itself, and the first features that the first image may include are extremely flexible and varied, which greatly improves the flexibility and diversity of acquiring image features and can fully meet user needs.
  • FIG. 13 is a schematic structural diagram of an electronic device 1300 provided by the embodiment of the present application.
  • the electronic device provided by this embodiment includes: a memory 1310 and a processor 1320, where the memory 1310 is used to store a computer program, and the processor 1320 is configured to execute the methods described in the above method embodiments when the computer program is invoked.
  • the electronic device provided in this embodiment can execute the foregoing method embodiment, and its implementation principle and technical effect are similar, and details are not repeated here.
  • an embodiment of the present application also provides a chip system.
  • the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, so as to implement the methods described in the above method embodiments.
  • the chip system may be a single chip, or a chip module composed of multiple chips.
  • the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method described in the foregoing method embodiment is implemented.
  • An embodiment of the present application further provides a computer program product, which, when the computer program product is run on an electronic device, enables the electronic device to implement the method described in the foregoing method embodiments.
  • if the above integrated units are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing related hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • the computer program When executed by a processor, the steps in the above-mentioned various method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable storage medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the disclosed device/device and method can be implemented in other ways.
  • the device/device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the term "if" may be construed, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting".
  • the phrase "if it is determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
  • references to "one embodiment” or “some embodiments” or the like in the specification of the present application means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.

Abstract

The present application provides a method for acquiring image features and an electronic device, relating to the field of terminal technologies. The method includes: a first device receives a first acquisition instruction, the first acquisition instruction being used to instruct the first device to acquire an image feature; in response to the first acquisition instruction, the first device acquires a first feature, the first feature being a feature of a first image of a target device, the target device being the first device or a second device associated with the first device, and the first image being at least part of the picture currently displayed on the display screen of the target device. The technical solution provided by the present application can improve the flexibility and diversity of acquiring image features.

Description

获取图像特征的方法及电子设备
本申请要求于2021年6月25日提交国家知识产权局、申请号为202110713551.2、申请名称为“获取图像特征的方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端领域,尤其涉及一种获取图像特征的方法及电子设备。
背景技术
随着电子设备技术的不断发展,电子设备的各项功能正在逐渐完善。其中,手写和绘画等图文编辑功能使得用户能够在电子设备的屏幕中手写文字或者绘制图像,便于用户记录信息或进行艺术创作。用户在使用上述图文编辑功能的过程中,可能需要特定色彩的笔刷,或者需要绘制特定的纹理,相应的,电子设备需要向用户提供这些色彩和纹理等图像特征。
现有技术中,电子设备可以向用户提供预设的图像特征库,该图像特征库中可以包括多种预设的图像特征。当电子设备基于任一种图像特征接收到用户的选择操作时,即确定该图像特征为用户所选择的图像特征,之后可以基于用户所选择的图像特征进行图文编辑等其他后续操作。
但预设的图像特征库中所包括的图像特征通常是非常有限的,以色彩特征为例,电子设备可能只会向用户提供几种常用的色彩特征,而实际上色彩特征的变化可能是无穷无尽的,因此显而易见地,现有技术中这种提供图像特征的方式局限性较大,难以满足用户需求。
发明内容
有鉴于此,本申请提供一种获取图像特征的方法及电子设备,能够提高获取图像特征的灵活性和多样性。
为了实现上述目的,第一方面,本申请实施例提供一种获取图像特征的方法,包括:
第一设备接收第一获取指令,所述第一获取指令用于指示所述第一设备获取图像特征;
所述第一设备响应所述第一获取指令,获取第一特征,所述第一特征为目标设备的第一图像的特征,所述目标设备为所述第一设备或与所述第一设备关联的第二设备,所述第一图像为所述目标设备的显示屏当前显示的画面中的至少部分图像。
其中,图像特征为图像在视觉上的特点,该图像特征可以用于对文本或图像进行编辑。当基于该图像特征对文本或图像进行编辑时,可以使得该文本或图像具有该图像特征。
需要说明的是,第一设备和第二设备关联,可以指第一设备与第二设备正在或能 够通过通信连接。在一些实施例中,第一设备与第二设备可以是当前通过近距离通信技术连接的设备。在另一些实施例中,第一设备和第二设备可以是对应同一用户标识的设备。例如,第一设备可以为用户A的平板电脑,第二设备可以为用户A的手机。
在本申请实施例中,若第一设备接收到用于指示获取图像特征的第一获取指令,则第一设备可以获取第一特征,其中,第一特征为目标设备的第一图像的特征,目标设备可以为第一设备,也可以是与第一设备关联的第二设备,第一图像可以为目标设备的显示屏当前显示的画面中的至少部分图像。由于该画面的内容来源非常广泛,可能是目标设备中某个应用程序的界面,也可能是目标设备中多个应用程序的界面的叠加,比如该画面可能是正在播放的视频中的一帧画面,也可能是相册中包括多个相片的列表,因此,第一图像不会受某个应用程序或第一设备本身的限制,第一图像可能包括的第一特征也是极其灵活且种类繁多的,从而极大地提高了获取图像特征的灵活性和多样性,能够充分满足用户需求。
可选地,第一特征可以为色彩类型的特征或纹理类型的特征。
可选地,所述目标设备为所述第一设备,所述第一设备响应所述第一获取指令,获取第一特征,包括:
所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像;
所述第一设备从所述第一图像提取所述第一特征。
由于第一图像是第一设备的显示屏当前显示的画面中的至少部分图像,而该画面不会受到某个应用程序的限制,相应的,第一图像也不会受第一设备中某个应用程序的限制,使得能够从预设的图像特征库之外的来源获取到第一特征,比如图文编辑应用程序的界面之外的区域等,因此提高了获取到图像特征的灵活性和多样性,能够充分满足用户需求。另外,与用户从第一设备之外向第一设备上传图像特征的方式相比,操作更加简单。
可选地,在所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像之前,所述方法还包括:
所述第一设备基于所述第一获取指令,创建第一窗口,所述第一窗口的尺寸与所述第一设备的显示屏的尺寸相同,且所述第一窗口为位于所述显示屏所显示的其他窗口上层的透明窗口;
所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像,包括:
所述第一设备若基于所述第一窗口接收到所述第一截图操作,则从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像;
在所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像之后,所述方法还包括:
所述第一设备关闭所述第一窗口。
由于第一设备显示屏中可能包括第二窗口和第三窗口等多个窗口,每个窗口可能归属于不同的应用程序,比如该显示屏的左侧可以为绘画程序的窗口,右侧可以为相册的窗口,所以为了避免第一设备将获取第一图像的操作与其他操作(比如针对相册 的操作)相混淆,提高获取第一图像的可靠性,第一设备可以创建第一窗口。
其中,上述包括第二窗口和第三窗口等多个窗口的界面,共同构成第二设备的显示屏当前显示的画面。
需要说明的是,第一窗口的透明度可以是第一设备预先接收相关技术人员提交得到的,也可以是在创建第一窗口之前,接收用户提交得到的。
可选地,第一窗口的透明度可以为100%。当然,在实际应用中,第一窗口的透明度还可以为其他数值,本申请实施例对第一窗口的透明度不做具体限定。
可选地,所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像,包括:
所述第一设备基于所述第一截图操作,在所述第一设备的显示屏确定第一封闭区域;
所述第一设备基于所述第一封闭区域获取所述第一图像。
可选地,所述第一设备基于所述第一截图操作,在所述第一设备的显示屏确定第一封闭区域,包括:
所述第一设备基于所述第一截图操作,确定第一位置,将处于所述第一位置处的第一边框内的区域确定为所述第一封闭区域,所述第一边框为预设的边框;或,
所述第一截图操作为滑动操作,所述第一设备将由所述滑动操作的滑动轨迹构成的封闭区域,确定为所述第一封闭区域。
用户可以通过显示屏上滑动,灵活准确地获取任意大小任意形状的第一图像。当然,也可以仅指定第一封闭区域的第一位置,再结合预设的形状和大小的第一边框,快速确定第一封闭区域和第一图像,也能够降低获取到第一图像的难度。
在一些实施例中,第一封闭区域可以是由该滑动轨迹构成的最大封闭区域或最小封闭区域。
需要说明的是,若滑动操作的滑动轨迹并未构成封闭区域,则可以将该滑动轨迹的首尾相连接,从而得到一个封闭区域。
还需要说明的是,第一边框(包括大小和形状)可以通过事先设置确定。在一些实施例中,第一设备可以事先向用户提供多个不同的边框,并在基于任一边框接收到用户的选择操作时,将该边框确定为第一边框。本申请实施例对此第一边框的大小、形状以及设置方式均不作具体限定。
可选地,所述第一设备基于所述第一封闭区域获取所述第一图像,包括:
所述第一设备从所述第一设备的显示屏当前显示的所述画面截取所述第一封闭区域中的所述第一图像;或,
所述第一设备截取所述第一设备的显示屏当前显示的所述画面作为第二图像,并基于所述第一封闭区域对所述第二图像裁剪得到所述第一图像。
其中,根据第一封闭区域确定第一图像,可以比第二图像包括更少的图像数据,能够使得后续提取第一特征所需要分析的数据更少更准确,能够提高获取到第一特征效率和准确性。当然,在实际应用中,也可以获取第二图像来用于后续获取第一特征。
可选地,所述目标设备为所述第二设备,所述第一设备响应所述第一获取指令,获取第一特征,包括:
所述第一设备向所述第二设备发送第一获取请求,所述第一获取请求与所述第一获取指令对应,所述第一获取请求用于请求从所述第二设备获取图像特征;
所述第一设备接收所述第二设备反馈的所述第一特征。
由于第一图像是第二设备的显示屏当前显示的画面中的至少部分图像,而该画面不会受到某个应用程序的限制,相应的,第一图像也不会受第一设备本身的限制,使得第一设备能够从第一设备之外的第二设备获取到第一特征,进一步提高了获取到图像特征的灵活性和多样性,能够充分满足用户需求。例如,用户可以将手机相册中某张照片的色彩或纹理,应用至平板电脑的绘图程序中。
其中,第一设备可以通过分布式数据交互通道,与第二设备进行通信。
在一些实施例中,第一获取请求还可以携带目标特征类型。
可选地,所述目标设备为所述第二设备,所述第一设备响应所述第一获取指令,获取第一特征,包括:
所述第一设备向所述第二设备发送第二获取请求,所述第二获取请求与所述第一获取指令对应,所述第二获取请求用于请求从所述第二设备获取图像;
所述第一设备接收所述第二设备反馈的所述第一图像;
所述第一设备从所述第一图像提取所述第一特征。
可选地,在所述第一设备响应所述第一获取指令,获取第一特征之前,所述方法还包括:
所述第一设备接收第一设置指令,所述第一设置指令用于指示所述目标设备为所述第一设备;或者,
所述第一设备接收第二设置指令,所述第二设置指令用于指示所述目标设备为所述第二设备。
需要说明的是,第一设备也可以通过其他方式确定目标设备为第一设备或第二设备,或者,第一设备也可以被配置为只从第一设备或第二设备获取图像特征,从而不必判断目标设备是第一设备或第二设备。
可选地,在所述第一设备响应所述第一获取指令,获取第一特征之前,所述方法还包括:
所述第一设备接收第三设置指令,所述第三设置指令用于指示获取图像特征的目标特征类型;
所述第一设备响应所述第一获取指令,获取第一特征,包括:
所述第一设备基于所述目标特征类型,获取所述第一特征。
可选地,所述目标特征类型包括色彩类型或纹理类型。
也即是,若用户指定了目标特征类型,第一设备可以准确地从第一图像中,提取得到该目标特征类型的第一特征。而在另一些实施例中,若用户未指定目标特征类型,则第一设备可以基于至少一种特征类型,对第一图像进行处理,从而得到至少一种类型的第一特征。
可选地,所述方法还包括:
所述第一设备接收关联的第三设备的第三获取请求,所述第三获取请求用于请求从所述第一设备获取图像特征;
所述第一设备获取第三图像,所述第三图像为所述第一设备的显示屏当前显示的画面中的至少部分图像;
所述第一设备从所述第三图像提取第二特征;
所述第一设备向所述第三设备反馈所述第二特征。
第一设备也可以作为图像特征的提供方,从而向关联的第三设备提供第二特征。且在一些实施例中,第一设备可以向第三设备发送第三图像,相应的,第三设备对第三图像进行处理,得到第二特征。
可选地,第一设备基于第一特征进行图文编辑操作。
第一设备可以在图文编辑程序中基于第一特征进行图文编辑操作，从而将第一特征应用至新的文本或图像，使得操作后的文本或图像具有第一特征。
在一些实施例中,第一设备可以将获取到的第一特征进行添加到内置调色板或内置纹理图像库等图像特征库中,使得用户下一次可以直接从内置调色板或内置纹理图像库等图像特征库,获取第一特征。
在一些实施例中,第一设备或第二设备也可以不从第一图像获取第一特征,而是直接将第一图像复制至图文编辑程序。
第二方面,本申请实施例提供一种获取图像特征的方法,包括:
第二设备接收第一设备发送的第一获取请求,所述第一获取请求用于请求从所述第二设备获取图像特征;
所述第二设备获取第一图像,所述第一图像为所述第二设备的显示屏当前显示的画面中的至少部分图像;
所述第二设备从所述第一图像提取所述第一特征;
所述第二设备向所述第一设备反馈所述第一特征。
可选地,所述第二设备获取第一图像,包括:
所述第二设备基于第一截图操作,从所述第二设备的显示屏当前显示的所述画面中获取所述第一图像。
可选地,在所述第二设备基于第一截图操作,从所述第二设备的显示屏当前显示的所述画面中获取所述第一图像之前,所述方法还包括:
所述第二设备基于所述第一获取请求,创建第一窗口,所述第一窗口的尺寸与所述第二设备的显示屏的尺寸相同,且所述第一窗口为位于所述显示屏所显示的其他窗口上层的透明窗口;
所述第二设备基于第一截图操作,从所述第二设备的显示屏当前显示的所述画面中获取所述第一图像,包括:
所述第二设备若基于所述第一窗口接收到所述第一截图操作,则从所述第二设备的显示屏当前显示的所述画面中获取所述第一图像;
在所述第二设备基于第一截图操作,从所述第二设备的显示屏当前显示的所述画面中获取所述第一图像之后,所述方法还包括:
所述第二设备关闭所述第一窗口。
可选地,所述第二设备基于第一截图操作,从所述第二设备的显示屏当前显示的 所述画面中获取所述第一图像,包括:
所述第二设备基于所述第一截图操作,在所述第二设备的显示屏确定第一封闭区域;
所述第二设备基于所述第一封闭区域获取所述第一图像。
可选地,所述第二设备基于所述第一截图操作,在所述第二设备的显示屏确定第一封闭区域,包括:
所述第二设备基于所述第一截图操作,确定第一位置,将处于所述第一位置处的第一边框内的区域确定为所述第一封闭区域,所述第一边框为预设的边框;或,
所述第一截图操作为滑动操作,所述第二设备将由所述滑动操作的滑动轨迹构成的封闭区域,确定为所述第一封闭区域。
可选地,所述第二设备基于所述第一封闭区域获取所述第一图像,包括:
所述第二设备从所述第二设备的显示屏当前显示的所述画面截取所述第一封闭区域中的所述第一图像;或,
所述第二设备截取所述第二设备的显示屏当前显示的所述画面作为第二图像,并基于所述第一封闭区域对所述第二图像裁剪得到所述第一图像。
在一些实施例中,第二设备也可以在接收到第一设备发送的第二获取请求时,获取并向第一设备反馈第一图像,第一设备从第一图像提取得到第一特征。
第三方面,本申请实施例提供一种获取图像特征的装置,所述装置可以设置在电子设备中,且所述装置可以用于执行如第一方面任一项和/或第二方面任一项所述的方法。
可选地,该装置中可以包括手绘笔刷引擎模块。手绘笔刷引擎模块可以用于电子设备与用户之间的交互,比如触发电子设备按照本申请实施例所提供的方法来获取图像特征。
可选地,该装置中可以包括窗口管理服务模块。窗口管理服务模块可以用于管理电子设备中各窗口的生命周期以及检测针对各窗口的触摸事件等等。比如,电子设备可以通过窗口管理服务模块创建和关闭第一窗口。
可选地,该装置中可以包括图层合成模块。图层合成模块可以用于将获取到多个窗口的画面合成为一幅图像,因而可以用于获取第一图像或第二图像。
可选地,该装置中可以包括分布式任务调度模块。分布式任务调度模块可以用于电子设备通过分布式数据交互通道,从其他设备调用服务。
第四方面,本申请实施例提供一种电子设备,包括:存储器和处理器,存储器用于存储计算机程序;处理器用于在调用计算机程序时执行上述第一方面中任一项和/或第二方面任一项所述的方法。
第五方面,本申请实施例提供一种芯片系统,所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述第一方面中任一项和/或第二方面任一项所述的方法。
其中,所述芯片系统可以为单个芯片,或者多个芯片组成的芯片模组。
第六方面,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述第一方面中任一项和/或第二方面任一项所述的 方法。
第七方面,本申请实施例提供一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行上述第一方面中任一项和/或第二方面任一项所述的方法。
可以理解的是,上述第二方面至第七方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
附图说明
图1为本申请实施例所提供的一种电子设备的结构示意图;
图2为本申请实施例所提供一种电子设备的软件结构框图;
图3为本申请实施例提供的一种图文编辑的方法的流程图;
图4为本申请实施例所提供的一种获取图像特征的系统的框图
图5为本申请实施例所提供的一种获取图像特征的方法的流程图;
图6为本申请实施例所提供的一种电子设备的显示界面的示意图;
图7为本申请实施例所提供的另一种电子设备的显示界面的示意图;
图8为本申请实施例所提供的另一种电子设备的显示界面的示意图;
图9为本申请实施例所提供的一种显示场景的示意图;
图10为本申请实施例所提供的一种第一封闭区域的示意图;
图11为本申请实施例所提供的另一种第一封闭区域的示意图;
图12为本申请实施例所提供的另一种电子设备的显示界面的示意图;
图13为本申请实施例所提供的一种电子设备的结构示意图。
具体实施方式
本申请实施例提供的获取图像特征的方法可以应用于手机、平板电脑、可穿戴设备、车载设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子设备的具体类型不作任何限制。
请参照图1,图1是本申请实施例提供的一例电子设备100的结构示意图。电子设备100可以包括处理器110、存储器120、通信模块130和显示屏140等。
其中,处理器110可以包括一个或多个处理单元,存储器120用于存储程序代码和数据。在本申请实施例中,处理器110可执行存储器120存储的计算机执行指令,用于对电子设备100的动作进行控制管理。
通信模块130可以用于电子设备100的各个内部模块之间的通信、或者电子设备100和其他外部电子设备之间的通信等。示例性的,如果电子设备100通过有线连接的方式和其他电子设备通信,通信模块130可以包括接口等,例如USB接口,USB接口可以是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
或者,通信模块130可以包括音频器件、射频电路、蓝牙芯片、无线保真(wireless fidelity,Wi-Fi)芯片、近距离无线通讯技术(near-field communication,NFC)模块等, 可以通过多种不同的方式实现电子设备100与其他电子设备之间的交互。
显示屏140可以显示人机交互界面中的图像或视频等。
可选地,电子设备100还可以包括压力传感器150用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器150可以设置于显示屏。压力传感器150的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏,电子设备100根据压力传感器150检测所述触摸操作强度。电子设备100也可以根据压力传感器150的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
可选地,电子设备100还可以包括外设设备160,例如鼠标、键盘、扬声器、麦克风、触控笔等。
应理解,除了图1中列举的各种部件或者模块之外,本申请实施例对电子设备100的结构不做具体限定。在本申请另一些实施例中,电子设备100还可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。请参照图2,图2是本申请实施例的电子设备100的软件结构框图。电子设备100可以包括应用层210和系统层220。
应用层210可以包括一系列应用程序包。在一些实施例中,应用程序包可以包括文档编辑应用、绘图应用等图文编辑应用。图文编辑应用可以用于对文本或图像进行编辑,比如生成文字、修改文字样式或绘制图像等。
在一些实施例中,应用层210中可以包括内置调色板211和内置纹理图像库212。内置调色板211中可以包括多个预置的色彩特征。内置纹理图像库212中可以包括多个预置的或用户事先上传的纹理特征。
在一些实施例中,应用层210中可以包括手绘笔刷引擎模块213。手绘笔刷引擎模块213可以用于电子设备100与用户之间的交互,比如触发电子设备100按照本申请实施例所提供的方法来获取图像特征。
系统层220可以包括窗口管理服务模块221和图层合成模块222。
窗口管理服务模块221可以用于管理电子设备100中各窗口的生命周期以及检测针对各窗口的触摸事件等等。其中,触摸事件可以包括触摸的坐标以及压力值等等。
图层合成模块222可以用于将获取到多个窗口的画面合成为一幅图像。
在一些实施例中,系统层220还可以包括分布式任务调度模块223。分布式任务调度模块223可以用于电子设备100通过分布式数据交互通道,从其他设备调用服务。
在一些实施例中,系统层220还可以包括图像绘制模块。图像绘制模块可以用于在显示屏140绘制图像。
图文编辑功能是电子设备的一种重要的功能。用户可以通过图文编辑功能,在电子设备上编辑文本或图像。在进行图文编辑的过程中,用户通常需要对文本或图像进行个性化处理,比如将文本或图像设定为特定颜色,或者在文本或图像的某个区域绘制特定纹理。
请参照图3,为本申请实施例所提供的一种图文编辑的方法的流程图。
S301,用户触摸电子设备100的显示屏140。
用户可以通过肢体或者触控笔触摸电子设备100的显示屏140,从而与电子设备100进行交互,比如选择需要染色或绘制纹理的文字或图像区域。
S302,电子设备100通过系统层220处理触摸事件,得到触摸事件对象。
电子设备100可以通过系统层220处理触摸事件,并将触摸事件的坐标和压力值封装为触摸事件对象,并向应用层210提供该触摸事件对象。
S303,电子设备100通过应用层210基于触摸事件对象,进行相应的逻辑处理。
电子设备100中的应用层210的图文编辑程序(比如绘图程序或文档程序),可以在获取到触摸事件对象后,进行该应用程序内部的逻辑处理,比如确定用户打开内置调色板211、确定用户在内置调色板211所选择的颜色、确定用户打开内置纹理图像库212或确定用户在内置纹理图像库所选择的纹理特征。
S304,电子设备100通过系统层220进行图文编辑操作。
电子设备100可以通过系统层220进行图文编辑操作,并在显示屏140显示图文编辑结果。以对图像进行编辑操作为例,若是对图像进行着色,可以基于从内置调色板211确定的色彩特征,对该图像进行染色处理;若是进行纹理绘制,可以基于从内置纹理图像库212确定的纹理特征,绘制相应的纹理。
可以看出,电子设备在实施上述图文编辑的方法的过程中,只能够通过内置调色板向用户提供色彩特征、通过内置纹理图像特征库向用户提供纹理特征。由于内置调色板和内置纹理图像特征库等图像特征库通常都是由图文编辑程序的开发人员事先设置的,其中所包括的图像特征相当有限,难以满足用户需求。
为解决上述技术问题,本申请实施例提供了一种获取图像特征的系统和方法。
请参照图4,为本申请实施例所提供的一种获取图像特征的系统的框图。该系统可以包括第一设备410,还可以包括与第一设备410关联的第二设备420以及用于第一设备410与第二设备420数据交互的分布式数据交互通道430。其中,第一设备410和第二设备420关联,可以指第一设备410与第二设备420正在或能够通过通信连接。在一些实施例中,第一设备410与第二设备420可以是当前通过近距离通信技术连接的设备。在另一些实施例中,第一设备410和第二设备420可以是对应同一用户标识的设备。例如,第一设备410可以为用户A的平板电脑,第二设备420可以为用户A的手机。
第一设备410中可以包括应用层411和系统层412。应用层411可以包括手绘笔刷引擎模块413。系统层412可以包括窗口管理服务模块414、图层合成模块415和分布式任务调度模块416。
第二设备420中可以包括应用层421和系统层422。应用层421可以包括手绘笔刷引擎模块423。系统层422可以包括窗口管理服务模块424、图层合成模块425和分 布式任务调度模块426。
需要说明的是,上述手绘笔刷引擎模块413、窗口管理服务模块414、图层合成模块415和分布式任务调度模块416,可以分别与前述中图2电子设备100中的手绘笔刷引擎模块213、窗口管理服务模块221、图层合成模块222和分布式任务调度模块223相似或相同;上述手绘笔刷引擎模块423、窗口管理服务模块424、图层合成模块425和分布式任务调度模块426,可以分别与前述中图2电子设备100中的手绘笔刷引擎模块213、窗口管理服务模块221、图层合成模块222和分布式任务调度模块223相似或相同。
还需要说明的是,在第一设备可以通过触控之外的方式对用户进行交互的情况下,第一设备410和/或第二设备420中的手绘笔刷引擎模块可以省略,以及,在第一设备410不需要从第二设备420获取图像特征的情况下,第一设备410中的分布式任务调度模块416也可以省略。
还需要说明的是,第一设备410和/或第二设备420的应用层中也可以包括内置调色板和内置纹理图像库中的至少一个。
若第一设备410接收到用于指示获取图像特征的第一获取指令,则第一设备410可以获取第一特征,其中,第一特征为目标设备的第一图像的特征,目标设备可以为第一设备410,也可以是第二设备420,第一图像可以为目标设备当前显示的画面中的至少部分图像。由于目标设备所显示的画面的内容来源非常广泛,可能是目标设备中某个应用程序的界面,也可能是目标设备中多个应用程序的界面的叠加,比如该画面可能是某正在全屏播放的视频中的一帧画面,也可能是相册中包括多个相片的列表。因此,第一图像作为该画面中的一部分,不会受第一设备410中某个应用程序的限制或第一设备410本身的限制,与图文编辑程序的内置调色板或内置纹理库所能提供的非常有限的色彩特征或纹理特征相比,第一图像可能包括的第一特征是极其灵活且种类繁多的,从而极大地提高了获取图像特征的灵活性和多样性。比如,用户可以在第一设备410的显示屏中打开喜欢的某个照片,使得该显示屏当前显示的画面中包括该照片,然后再获取第一图像并从第一图像获取第一特征,即能够快速地从用户喜欢的图像中获取图像特征,能够充分满足用户需求。
下面以具体地实施例对本申请的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
请参照图5,为本申请实施例所提供的一种获取图像特征的方法的流程图。需要说明的是,该方法并不以图5以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法可以用于如图4中第一设备或第一设备与第二设备的交互中,包括如下步骤:
S501,第一设备接收第一获取指令。
其中,第一获取指令用于指示第一设备获取图像特征。
需要说明的是,图像特征可以为图像在视觉上的特点,该图像特征可以用于对文本或图像进行编辑。当基于该图像特征对文本或图像进行编辑时,可以使得该文本或图像具有该图像特征。
在一些实施例中,图像特征的特征类型可以包括色彩类型和纹理类型。当然,在实际应用中,图像特征的特征类型还可以包括其他的特征类型,比如形状类型和空间关系类型中的至少一个。
第一设备可以通过人机交互界面,向用户提供用于触发获取图像特征的控件,并基于该控件接收用户所提交的第一获取指令。
在一些实施例中,第一设备的显示屏为触控屏,用户可以通过手指或者触控笔在屏幕上点击或滑动,从而与第一设备进行交互。如图6所示,第一设备的左下角包括“获取图像特征”按钮,当第一设备基于该按钮接收到点击操作时,可以确定接收到第一获取指令。
在一些实施例中,由于图像特征的类型可以包括多种,因此为了提高获取图像特征的准确性,第一设备可以接收用户提交第三设置指令,第三设置指令用于指示目标特征类型。在一些实施例中,目标特征类型可以包括色彩类型或纹理类型。在另一些实施例中,目标特征类型也可以携带在第一获取指令中。第一设备在基于如图6中的“获取图像特征”接收到点击操作时,可以继续向用户提供用于确定特征类型的二级菜单如图7所示,该二级菜单中包括多种待选择的特征类型,当第一设备基于任一特征类型接收到用户的点击操作时,即确定该特征类型为用户所选择的目标特征类型。
在一些实施例中,为了使得用户可以从其他电子设备获取图像特征并应用至第一设备,进一步提高获取图像特征的范围和灵活性,第一设备可以接收用户提交的第一设置指令或第二设置指令,其中,第一设置指令中可以携带第一设备的设备标识,用于指示获取图像特征的目标设备为第一设备,第二设置指令中可以携带第二设备的设备标识,用于指示获取图像特征的目标设备为第二设备。在另一些实施例中,第一设备的设备标识或第二设备的设备标识也可以携带在第一获取指令中。第一设备在基于如图6中的“获取图像特征”接收到用户的点击操作,或者在基于如图7所示的二级菜单确定用户所选择目标特征类型时,可以继续显示如图8所示的设备选择界面,该设备选择界面包括至少一个设备标识,当基于任一设备标识接收到用户的点击操作时,可以确定该设备标识所对应的电子设备为目标设备。例如,当第一设备基于第一设备的设备标识接收到点击操作时,可以确定接收到第一设置指令,第一设备为目标设备;当第一设备基于第二设备的设备标识接收到点击操作时,可以确定接收到第二设置指令,第二设备为目标设备。
其中,第二设备可以是与第一设备相关联的设备。
需要说明的是,为了以向用户提供更加个性化的图像特征获取方式,进一步提高获取图像特征的灵活性和准确性,在实际应用中,第一设备可以接收其他更多的用于指示获取图像特征方式的指示信息,这些指示信息可以分别由单独的设置指令来指示,也可以都携带在第一获取指令中。
还需要说明的是,本申请实施例对第一设备接收用于指示获取图像特征方式的指示信息的方式不做具体限定。
S502,第一设备判断是否需要跨设备获取图像特征。如果是则执行S506,否则执行S503。
第一设备为了确定后续是从本端还是其他设备来获取图像特征,进而采取对应的 获取方法,第一设备可以判断是否需要跨设备获取图像特征。
在一些实施例中,第一设备若未接收到用户提交的任何设备标识,或者接收到用户提交的设备标识为第一设备的设备标识,则可以确定不需要跨设备获取图像特征。相应的,第一设备若接收到用户提交的设备标识不为第一设备的设备标识,则确定需要从所接收的设备标识对应的第二设备,跨设备提取图像特征。
在一些实施例中,第一设备可以判断第一设置指令、第二设置指令或第一获取指令中是否携带设备标识。如果第一设置指令或第一获取指令中未携带任何设备标识或者所携带的设备标识为第一设备的设备标识,则可以确定不需要跨设备提取图像特征。如果第二设置指令或第一获取指令中携带设备标识,且该设备标识不为第一设备的设备标识,则需要从所接收的设备标识对应的第二设备,跨设备提取图像特征。
需要说明的是,在实际应用中,第一设备也可以被配置为只从本端获取图像特征或者只从第二设备获取图像特征,因此也可以不执行S502,即S502是可选的步骤。
S503,第一设备创建第一窗口。
由于第一设备的显示屏中可能包括第二窗口和第三窗口等多个窗口,每个窗口可能归属于不同的应用程序,如前述图6-图8所示,该显示屏的左侧可以为绘画程序的窗口,右侧可以为相册的窗口,所以为了避免第一设备将获取第一图像的操作与其他操作(比如针对相册的操作)相混淆,提高获取第一图像的可靠性,第一设备可以通过前述中的窗口管理服务模块创建第一窗口。
其中,第一窗口的尺寸可以与第一设备的显示屏的尺寸相同,且第一窗口为位于显示屏所显示的其他窗口上层的透明窗口,即第一窗口为位于第一设备所有应用程序上层的全局性透明窗口。
需要说明的是,第一窗口的透明度可以是第一设备预先接收相关技术人员提交得到的,也可以是在创建第一窗口之前,接收用户提交得到的。
可选地,第一窗口的透明度可以为100%。当然,在实际应用中,第一窗口的透明度还可以为其他数值,本申请实施例对第一窗口的透明度不做具体限定。
例如,一种显示场景的示意图可以如图9所示。该场景包括处于顶层的第一窗口901,第一窗口为全局性透明窗口,且透明度为100%,在第一窗口下层,为第一设备原始的显示界面,包括第二窗口902和第三窗口903,其中,第二窗口902为如图6-图8中所示的绘画程序的窗口,第三窗口为如图6-图8中所示的相册的窗口。
需要说明的是,在实际应用中,第一设备也可以通过其他方式来获取第一图像,因此也可以不执行S503,即S503是可选的步骤。
S504,第一设备获取第一图像。
第一图像可以为第一设备的显示屏当前显示的画面中的至少部分图像。
在一些实施例中,第一设备可以基于第一截图操作,从第一设备的显示屏当前显示的画面中获取第一图像。在一些实施例中,在第一设备创建了第一窗口的情况下,第一设备可以基于第一窗口接收第一截图操作。
在一些实施例中,用户可以通过第一截图操作设置所要获取的第一图像的区域范围。第一设备可以基于第一截图操作,在第一设备的显示屏确定第一封闭区域,第一封闭区域内的图像即为用户需要获取的第一图像。
在一些实施例中,第一截图操作可以用于直接确定第一封闭区域。第一截图操作可以包括滑动操作。相应的,第一设备可以将由该滑动操作的滑动轨迹构成的封闭区域,确定为第一封闭区域。在一些实施例中,该封闭区域可以是由该滑动轨迹构成的最大封闭区域或最小封闭区域。也即是,用户可以通过在显示屏上滑动,灵活准确地获取任意大小任意形状的第一图像。
例如,如图10所示,第一设备的显示屏的右上方的照片包括了两侧的河岸以及上方跳跃的人,用户通过触控笔在该照片的右下角绘制出一个不规则的第一封闭区域1001,第一封闭区域1001中包括右侧的河岸。
需要说明的是,若滑动操作的滑动轨迹并未构成封闭区域,则可以将该滑动轨迹的首尾相连接,从而得到一个封闭区域。
在一些实施例中,第一截图操作可以用于确定第一封闭区域在显示屏中的第一位置,预设的第一边框可以用于确定第一封闭区域的大小和形状。相应地,第一设备可以基于第一截图操作,确定第一位置,将处于第一位置处的第一边框内的区域确定为第一封闭区域。由于不需要用户绘制第一封闭区域,因此能够降低获取第一图像的难度。
例如，如图11所示，第一边框为直径为3厘米的圆形框。第一设备的显示屏的右下方的照片包括了一个人的半身照，用户通过在显示屏上点击或滑动，点击的位置或滑动轨迹的终点的位置即为第一位置，第一设备在第一位置处生成直径为3厘米的圆形框，该圆形框内的区域即为第一封闭区域1001，第一封闭区域1001中包括人的头像。
需要说明的是,第一边框(包括大小和形状)可以通过事先设置确定。在一些实施例中,第一设备可以事先向用户提供多个不同的边框,并在基于任一边框接收到用户的选择操作时,将该边框确定为第一边框。本申请实施例对此第一边框的大小、形状以及设置方式均不作具体限定。
还需要说明的是,在实际应用中,第一截图操作可以也包括其他方式的操作,只要能够确定第一封闭区域即可,本申请实施例对此第一截图操作的操作方式不做具体限定。
第一设备在确定第一封闭区域之后,可以基于第一封闭区域获取第一图像。
在一些实施例中,第一设备可以截取第一设备的显示屏当前显示的画面为第二图像,并基于第一封闭区域对第二图像进行裁剪,得到第一图像。也即是,第一设备可以先对第一设备的显示屏的整个屏幕进行截屏,然后按照第一封闭区域从截屏得到的第二图像中裁剪出第一图像。
在一些实施例中,第一设备可以从第一设备的显示屏当前显示的画面截取第一封闭区域中的第一图像。其中,第一设备可以通过图层合成模块,基于各窗口分别与第一封闭区域的位置关系,确定与第一封闭区域匹配的至少一个窗口的画面,并按照这至少一个窗口之间的上下层级关系,将至少一个窗口的画面合成为第一图像。
需要说明的是,根据第一封闭区域确定第一图像,可以比第二图像包括更少的图像数据,能够使得后续提取第一特征所需要分析的数据更少更准确,能够提高获取到第一特征效率和准确性。当然,在实际应用中,也可以获取第二图像来用于后续获取 第一特征。
在一些实施例中,若第一设备在获取第一图像之前创建了第一窗口,则第一设备可以在获取到第一图像之后,关闭第一窗口,使得用户后续能够继续与其他窗口进行交互。
S505,第一设备获取第一特征。之后,第一设备可以执行S511。
在获取到第一图像之后,第一设备可以对第一图像进行分析处理,从而提取到第一特征。由于第一图像是第一设备的显示屏当前显示的画面中的至少部分图像,而该画面不会受到某个应用程序的限制,相应的,第一图像也不会受第一设备中某个应用程序的限制,使得能够从预设的图像特征库之外的来源获取到第一特征,比如图文编辑应用程序的界面之外的区域等,因此提高了获取到图像特征的灵活性和多样性,能够充分满足用户需求。另外,与用户从第一设备之外向第一设备上传图像特征的方式相比,操作更加简单。
在一些实施例中,第一设备通过第三设置指令或第一获取指令,获取到了用户所指定的目标特征类型,那么第一设备可以基于该目标特征类型,对第一图像进行处理,从而得到该目标特征类型的第一特征。
例如,第一获取指令中携带的特征类型为色彩类型,则第一设备可以对第一图像的色彩进行类型分析,所得到的第一特征为色彩类型的特征,如红绿蓝(red green blue,RGB)值;第一获取指令中携带的特征类型为纹理类型,则第一设备可以对第一图像的纹理进行类型分析,所得到的第一特征为纹理类型的特征。
在一些实施例中,若第一设备未获取到用户所指定的特征类型,则可以基于至少一种特征类型,对第一图像进行处理,从而得到至少一种类型的第一特征。
其中,对于颜色类型,第一设备可以通过颜色直方图、颜色集、颜色矩、颜色聚合向量或颜色相关图等方式,对第一图像进行分析;对于纹理类型,第一设备可以通过统计法、几何法、模型法或信号处理法等方式,对第一图像进行分析,或者也可以对第一图像进行模糊、降噪或添加盐值;对于形状特征,第一设备可以通过边界特征法、傅里叶形状描述法、几何参数法或形状不变矩法等方式,对第一图像进行分析;对于空间关系类型,第一设备可以将第一图像分割为多个图像块,然后提取每个图像块的特征并建立索引。当然,在实际应用中,第一设备也可以通过其他方式,对第一图像进行处理,从而得到第一特征,本申请实施例对从第一图像获取第一特征的方式并不做具体限定。
S506,第一设备向第二设备发送第一获取请求。
如果第一设备确定通过跨设备的方式获取第一特征,则可以通过分布式数据交互通道,向第二设备发送与第一获取指定对应的第一获取请求,从而请求第二设备获取图像特征。
在一些实施例中,第一获取请求中可以携带目标特征类型。
在一些实施例中,第一设备可以与第二设备建立分布式数据交互通道,并通过该分布式数据交互通道与第二设备进行数据交互,包括向第二设备发送第一获取请求以及后续接收第二设备反馈的数据。
在一些实施例中,第二设备在接收到第二设备发送的第一获取请求时,可以显示 第一通知信息,第一通知信息用于通知即将响应第一设备的第一获取请求。
例如,当第二设备接收到第一获取请求时,可以显示如图12所示的界面,该界面的顶部包括第一通知消息,内容为“即将为第一设备提取图像特征”,还包括接受和拒绝的按钮。如果基于接受按钮接收到用户的点击操作,则可以继续执行下述步骤。如果基于拒绝按钮接收到用户的点击操作,则可以停止执行后续操作。
在一些实施例中，第一设备也可以向第二设备发送第二获取请求，第二获取请求用于请求第二设备获取用于获取图像特征的图像。
S507,第二设备创建第一窗口。
需要说明的是,第二设备创建第一窗口的方式,可以与S503第一设备创建第一窗口的方式相同,此处不再一一赘述。
S508,第二设备获取第一图像。
需要说明的是,第二设备获取第一图像的方式,可以与S504第一设备获取第一图像的方式相同,此处不再一一赘述。
S509,第二设备获取第一特征。
需要说明的是,第二设备获取第一特征的方式,可以与S505第一设备获取第一特征的方式相同,此处不再一一赘述。
S510,第二设备向第一设备发送第一特征。相应的,第一设备若接收到第二设备反馈的第一特征则可以执行S511。
其中,第二设备可以基于前述中的分布式数据交互通道,向第一设备发送第一特征。
在一些实施例中,若第二设备接收到的是第二获取请求,则可以不执行S509,并在S510中向第一设备反馈第一图像。相应的,第一设备可以在接收到第一图像时执行前述S505,从而提取第一特征。
通过前述S506-S510,第一设备可以从第二设备获取到第一特征,由于第一图像是第二设备的显示屏当前显示的画面中的至少部分图像,而该画面不会受到某个应用程序的限制,相应的,第一图像也不会受第一设备本身的限制,使得第一设备能够从第一设备之外的第二设备获取到第一特征,进一步提高了获取到图像特征的灵活性和多样性,能够充分满足用户需求。例如,用户可以将手机相册中某张照片的色彩或纹理,应用至平板电脑的绘图程序中。
S511,第一设备基于第一特征进行图文编辑操作。
第一设备可以在图文编辑程序中基于第一特征进行图文编辑操作,从而将第一特征应用至新的文本或图像,使得操作后的操作对象具有第一特征。
在一些实施例中,第一设备可以将第一特征与触控笔绑定。若第一设备检测到该触控笔的绘制操作,则将该绘制操作所绘制的文本或图像的图像特征设置为第一特征。
例如,第一特征为RGB值,则第一设备可以将该RGB值与触控笔绑定,当用户通过该触控笔绘制时,所绘制的轨迹的色彩即为该RGB值所指示的色彩。又或者,第一特征为纹理特征,第一设备也可以将该纹理特征与触控笔绑定,当用户通过该触控笔绘制时,所绘制的轨迹的纹理特征即为该触控笔所绑定的纹理特征。
在一些实施例中,第一设备或第二设备也可以不从第一图像获取第一特征,而是 直接将第一图像复制至图文编辑程序。
需要说明的是,在实际应用中,第一设备在获取到第一特征之后,也可以不执行S511来立即应用第一特征,也即是,S511是可选地步骤。比如,在一些实施例中,第一设备可以将获取到的第一特征进行添加到内置调色板或内置纹理图像库等图像特征库中,使得用户下一次可以直接从内置调色板或内置纹理图像库等图像特征库,获取第一特征。
当然,在实际应用中,第一设备也可以按照与第二设备类似的方式,向与第一设备关联的第三设备提供第二特征。
在一些实施例中,第一设备可以接收第三设备的第三获取请求,第三获取请求用于请求从第一设备获取图像特征。相应的,第一设备获取第三图像,第三图像可以为第一设备的显示屏当前显示的画面中的至少部分图像,第一设备从第三图像提取第二特征,并向第三设备反馈第二特征。
在一些实施例中,第一设备可以接收第三设备的第四获取请求,第四获取请求用于请求从第一设备获取用于获取图像特征的图像。相应的,第一设备获取第三图像,并向第三设备反馈第三图像。第三设备从第三图像中提取得到第二特征。
在本申请实施例中,若第一设备接收到用于指示获取图像特征的第一获取指令,则第一设备可以获取第一特征,其中,第一特征为目标设备的第一图像的特征,目标设备可以为第一设备,也可以是与第一设备关联的第二设备,第一图像可以为目标设备的显示屏当前显示的画面中的至少部分图像。由于该画面的内容来源非常广泛,可能是目标设备中某个应用程序的界面,也可能是目标设备中多个应用程序的界面的叠加,比如该画面可能是正在播放的视频中的一帧画面,也可能是相册中包括多个相片的列表,因此,第一图像不会受某个应用程序或第一设备本身的限制,第一图像可能包括的第一特征也是极其灵活且种类繁多的,从而极大地提高了获取图像特征的灵活性和多样性,能够充分满足用户需求。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
基于同一发明构思,本申请实施例还提供了一种电子设备。该电子设备可以为前述中的第一设备、第二设备或第三设备。图13为本申请实施例提供的电子设备1300的结构示意图,如图13所示,本实施例提供的电子设备包括:存储器1310和处理器1320,存储器1310用于存储计算机程序;处理器1320用于在调用计算机程序时执行上述方法实施例所述的方法。
本实施例提供的电子设备可以执行上述方法实施例,其实现原理与技术效果类似, 此处不再赘述。
基于同一发明构思,本申请实施例还提供了一种芯片系统。该所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述方法实施例所述的方法。
其中,该芯片系统可以为单个芯片,或者多个芯片组成的芯片模组。
本申请实施例还提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述方法实施例所述的方法。
本申请实施例还提供一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行时实现上述方法实施例所述的方法。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读存储介质至少可以包括:能够将计算机程序代码携带到拍照装置/终端设备的任何实体或装置、记录介质、计算机存储器、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。在某些司法管辖区,根据立法和专利实践,计算机可读介质不可以是电载波信号和电信信号。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下 文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (15)

  1. 一种获取图像特征的方法,其特征在于,包括:
    第一设备接收第一获取指令,所述第一获取指令用于指示所述第一设备获取图像特征;
    所述第一设备响应所述第一获取指令,获取第一特征,所述第一特征为目标设备的第一图像的特征,所述目标设备为所述第一设备或与所述第一设备关联的第二设备,所述第一图像为所述目标设备的显示屏当前显示的画面中的至少部分图像。
  2. 根据权利要求1所述的方法,其特征在于,所述目标设备为所述第一设备,所述第一设备响应所述第一获取指令,获取第一特征,包括:
    所述第一设备基于第一截图操作,从所述第一设备的显示屏当前显示的所述画面中获取所述第一图像;
    所述第一设备从所述第一图像提取所述第一特征。
  3. The method according to claim 2, wherein before the obtaining, by the first device based on a first screenshot operation, the first image from the picture currently displayed on the display screen of the first device, the method further comprises:
    creating, by the first device based on the first obtaining instruction, a first window, wherein the size of the first window is the same as the size of the display screen of the first device, and the first window is a transparent window located above the other windows displayed on the display screen;
    the obtaining, by the first device based on a first screenshot operation, the first image from the picture currently displayed on the display screen of the first device comprises:
    obtaining, by the first device, the first image from the picture currently displayed on the display screen of the first device if the first screenshot operation is received through the first window; and
    after the obtaining, by the first device based on a first screenshot operation, the first image from the picture currently displayed on the display screen of the first device, the method further comprises:
    closing, by the first device, the first window.
  4. The method according to claim 2 or 3, wherein the obtaining, by the first device based on a first screenshot operation, the first image from the picture currently displayed on the display screen of the first device comprises:
    determining, by the first device based on the first screenshot operation, a first closed region on the display screen of the first device; and
    obtaining, by the first device, the first image based on the first closed region.
  5. The method according to claim 4, wherein the determining, by the first device based on the first screenshot operation, a first closed region on the display screen of the first device comprises:
    determining, by the first device based on the first screenshot operation, a first position, and determining a region within a first frame located at the first position as the first closed region, wherein the first frame is a preset frame; or
    the first screenshot operation is a slide operation, and the first device determines a closed region formed by a slide trajectory of the slide operation as the first closed region.
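The second branch of claim 5 treats the slide trajectory as the boundary of the closed region. A minimal sketch of that step is shown below; the patent does not specify an implementation, so the function names and the ray-casting point-in-polygon test are illustrative assumptions only.

```python
# Sketch: treat the slide trajectory as a closed polygon and decide
# which display-screen coordinates fall inside it (ray casting).
# All names here are illustrative, not from the patent.

def bounding_box(trajectory):
    """Axis-aligned bounding box of a slide trajectory [(x, y), ...]."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    return min(xs), min(ys), max(xs), max(ys)

def point_in_region(point, trajectory):
    """Ray-casting test: is `point` inside the closed slide trajectory?"""
    x, y = point
    inside = False
    n = len(trajectory)
    for i in range(n):
        x1, y1 = trajectory[i]
        x2, y2 = trajectory[(i + 1) % n]  # wrap around to close the path
        if (y1 > y) != (y2 > y):
            # x-coordinate where the polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A real implementation would run this over the pixel grid of the display screen (or rasterize the polygon directly) to build the mask of the first closed region.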
  6. The method according to claim 4 or 5, wherein the obtaining, by the first device, the first image based on the first closed region comprises:
    capturing, by the first device, the first image in the first closed region from the picture currently displayed on the display screen of the first device; or
    capturing, by the first device, the picture currently displayed on the display screen of the first device as a second image, and cropping the second image based on the first closed region to obtain the first image.
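The second branch of claim 6 (capture the whole screen as a second image, then crop it by the closed region) can be sketched as follows. The screenshot is modeled here as a row-major grid of pixel values, and the region is assumed to be given by its bounding rectangle in pixel coordinates; the claim leaves both representations open, so this is an illustrative assumption.

```python
def crop_to_region(second_image, left, top, right, bottom):
    """Crop the full-screen second image to the closed region's bounding
    rectangle [left, right) x [top, bottom), yielding the first image."""
    return [row[left:right] for row in second_image[top:bottom]]
```

With a non-rectangular closed region, pixels inside the bounding rectangle but outside the region would additionally be masked out (e.g. set transparent) before feature extraction.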
  7. The method according to any one of claims 2 to 6, wherein before the obtaining, by the first device in response to the first obtaining instruction, a first feature, the method further comprises:
    receiving, by the first device, a first setting instruction, wherein the first setting instruction indicates that the target device is the first device.
  8. The method according to claim 1, wherein the target device is the second device, and the obtaining, by the first device in response to the first obtaining instruction, a first feature comprises:
    sending, by the first device, a first obtaining request to the second device, wherein the first obtaining request corresponds to the first obtaining instruction and is used to request to obtain an image feature from the second device; and
    receiving, by the first device, the first feature fed back by the second device.
  9. The method according to claim 1, wherein the target device is the second device, and the obtaining, by the first device in response to the first obtaining instruction, a first feature comprises:
    sending, by the first device, a second obtaining request to the second device, wherein the second obtaining request corresponds to the first obtaining instruction and is used to request to obtain an image from the second device;
    receiving, by the first device, the first image fed back by the second device; and
    extracting, by the first device, the first feature from the first image.
  10. The method according to claim 8 or 9, wherein before the obtaining, by the first device in response to the first obtaining instruction, a first feature, the method further comprises:
    receiving, by the first device, a second setting instruction, wherein the second setting instruction indicates that the target device is the second device.
  11. The method according to any one of claims 1 to 10, wherein before the obtaining, by the first device in response to the first obtaining instruction, a first feature, the method further comprises:
    receiving, by the first device, a third setting instruction, wherein the third setting instruction indicates a target feature type of the image feature to be obtained; and
    the obtaining, by the first device in response to the first obtaining instruction, a first feature comprises:
    obtaining, by the first device, the first feature based on the target feature type.
  12. The method according to claim 11, wherein the target feature type comprises a color type or a texture type.
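Claims 11 and 12 name color and texture as target feature types but do not fix how either feature is computed. As a hedged sketch, a color feature might be the mean RGB over the region's pixels and a texture feature a mean absolute luminance gradient; both definitions, and the dispatch function, are assumptions for illustration.

```python
def color_feature(pixels):
    """Color-type feature: mean RGB over a flat list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def texture_feature(gray_rows):
    """Texture-type feature: mean absolute horizontal gradient over a
    grayscale row-major grid; larger values indicate busier texture."""
    diffs = [abs(row[i + 1] - row[i])
             for row in gray_rows
             for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def extract_feature(target_feature_type, region):
    """Dispatch on the target feature type set by the third setting
    instruction of claim 11."""
    if target_feature_type == "color":
        return color_feature(region)
    if target_feature_type == "texture":
        return texture_feature(region)
    raise ValueError(f"unsupported feature type: {target_feature_type}")
```

A production implementation might instead return a dominant-color histogram peak or a standard texture descriptor (e.g. LBP), but the dispatch structure would be the same.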
  13. A method for obtaining an image feature, comprising:
    receiving, by a second device, a first obtaining request sent by a first device, wherein the first obtaining request is used to request to obtain an image feature from the second device;
    obtaining, by the second device, a first image, wherein the first image is at least a partial image of a picture currently displayed on a display screen of the second device;
    extracting, by the second device, a first feature from the first image; and
    feeding back, by the second device, the first feature to the first device.
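The request/response flow of claim 13 can be simulated end to end. Everything below (the message shape, the fake frame, and the stand-in mean-gray feature) is an illustrative assumption; a real system would carry the request over a device-to-device channel and use the feature extraction configured on the devices.

```python
def handle_first_obtaining_request(current_frame):
    """Second-device handler: on receiving a feature request, obtain the
    first image from the currently displayed frame, extract the first
    feature (mean gray value, as a stand-in), and feed it back."""
    first_image = current_frame               # whole frame as the first image
    flat = [px for row in first_image for px in row]
    first_feature = sum(flat) / len(flat)     # stand-in feature extraction
    return {"first_feature": first_feature}   # reply fed back to first device

# First-device side: send the request and receive the fed-back feature.
frame = [[0, 50], [100, 150]]                 # fake currently displayed frame
reply = handle_first_obtaining_request(frame)
```

The alternative of claim 9, where the second device feeds back the image itself and the first device extracts the feature, would move the extraction step across the reply boundary but keep the same request/response shape.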
  14. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to, when invoking the computer program, perform the method according to any one of claims 1 to 12 or the method according to claim 13.
  15. A computer-readable storage medium, storing a computer program, wherein, when the computer program is executed by a processor, the method according to any one of claims 1 to 12 or the method according to claim 13 is implemented.
PCT/CN2022/085325 2021-06-25 2022-04-06 Image feature obtaining method and electronic device WO2022267617A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22827120.1A EP4343520A1 (en) 2021-06-25 2022-04-06 Image feature obtaining method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110713551.2A CN115525183A (zh) Image feature obtaining method and electronic device
CN202110713551.2 2021-06-25

Publications (1)

Publication Number Publication Date
WO2022267617A1 true WO2022267617A1 (zh) 2022-12-29

Family

ID=84545203

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085325 WO2022267617A1 (zh) 2021-06-25 2022-04-06 获取图像特征的方法及电子设备

Country Status (3)

Country Link
EP (1) EP4343520A1 (zh)
CN (1) CN115525183A (zh)
WO (1) WO2022267617A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015043382A1 (zh) * 2013-09-30 2015-04-02 北京奇虎科技有限公司 Screenshot apparatus and method suitable for touch-screen devices
CN105242920A (zh) * 2015-09-21 2016-01-13 联想(北京)有限公司 Screenshot system, screenshot method and electronic device
CN106033305A (zh) * 2015-03-20 2016-10-19 广州金山移动科技有限公司 Screen color picking method and apparatus
US20170315772A1 (en) * 2014-11-05 2017-11-02 Lg Electronics Inc. Image output device, mobile terminal, and control method therefor
CN109299310A (zh) * 2018-12-05 2019-02-01 王相军 Screen image color picking and searching method and system
CN111596848A (zh) * 2020-05-09 2020-08-28 远光软件股份有限公司 Interface color picking method, apparatus, device and storage medium


Also Published As

Publication number Publication date
CN115525183A (zh) 2022-12-27
EP4343520A1 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
US11922005B2 (en) Screen capture method and related device
KR102135215B1 Information processing method and terminal
US20210294429A1 (en) Apparatus, method and recording medium for controlling user interface using input image
EP3547218B1 (en) File processing device and method, and graphical user interface
CN111240673B Interactive graphic work generation method, apparatus, terminal and storage medium
EP3195601B1 (en) Method of providing visual sound image and electronic device implementing the same
EP3693837A1 (en) Method and apparatus for processing multiple inputs
CN107368810A Face detection method and apparatus
CN114115619A Application interface display method and electronic device
US20230367464A1 (en) Multi-Application Interaction Method
WO2021169466A1 Information collection method, electronic device and computer-readable storage medium
US20240193203A1 (en) Presentation Features for Performing Operations and Selecting Content
CN116095413B Video processing method and electronic device
WO2023236794A1 Audio track marking method and electronic device
US9195310B2 (en) Camera cursor system
CN109725806A Site editing method and apparatus
CN115700461A Cross-device handwriting input method, system and electronic device in a screen projection scenario
WO2022267617A1 Image feature obtaining method and electronic device
KR102076629B1 Method for editing images captured by a portable device, and portable device therefor
CN111626233B Key point labeling method, system, machine-readable medium and device
CN107885571A Display page control method and apparatus
WO2024125301A1 Display method and electronic device
CN116095412B Video processing method and electronic device
AU2015255305B2 (en) Facilitating image capture and image review by visually impaired users
KR20210101183A Method, apparatus and recording medium for controlling a user interface using an input image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22827120; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022827120; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 18572791; Country of ref document: US)
ENP Entry into the national phase (Ref document number: 2022827120; Country of ref document: EP; Effective date: 20231219)
NENP Non-entry into the national phase (Ref country code: DE)