WO2022267617A1 - Method for Acquiring Image Features and Electronic Device - Google Patents
Method for Acquiring Image Features and Electronic Device
- Publication number
- WO2022267617A1 (PCT/CN2022/085325)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature
- display screen
- window
- target
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04812—Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T11/60—Editing figures and text; Combining figures or text
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata automatically derived from the content
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04803—Split screen, i.e. subdividing the display area or the window area into separate subareas
Definitions
- the present application relates to the field of terminals, and in particular to a method for acquiring image features and an electronic device.
- graphic and text editing functions such as handwriting and drawing enable users to handwrite text or draw images on the screen of an electronic device, which is convenient for users to record information or perform artistic creation.
- when using an image-text editing function, the user may need a brush of a specific color, or need to draw a specific texture; accordingly, the electronic device needs to provide image features such as these colors and textures to the user.
- an electronic device may provide a user with a preset image feature library, and the image feature library may include various preset image features.
- when the electronic device receives the user's selection operation on any image feature, it determines that the image feature is the one selected by the user, and can then perform subsequent operations such as graphic editing based on the selected image feature.
- the image features included in the preset image feature library are usually very limited. Taking color features as an example, electronic devices may only provide users with several commonly used color features, whereas the variety of color features is practically endless. This prior-art way of providing image features therefore has significant limitations and can hardly meet user needs.
- the present application provides a method for acquiring image features and an electronic device, which can improve the flexibility and diversity of acquiring image features.
- the embodiment of the present application provides a method for acquiring image features, including:
- the first device receives a first acquisition instruction, where the first acquisition instruction is used to instruct the first device to acquire image features;
- the first device acquires a first feature in response to the first acquisition instruction, where the first feature is a feature of a first image of a target device, and the target device is the first device or a second device associated with the first device;
- the first image is at least a part of the image currently displayed on the display screen of the target device.
- the image feature is a visual feature of the image, and the image feature can be used to edit the text or the image.
- the text or image can be made to have the image feature.
- the association between the first device and the second device may mean that the first device and the second device are or can be connected through communication.
- the first device and the second device may be devices currently connected through short-range communication technology.
- the first device and the second device may be devices corresponding to the same user identifier.
- the first device may be user A's tablet computer, and the second device may be user A's mobile phone.
- the first device may acquire the first feature, where the first feature is a feature of the first image of the target device.
- the target device may be the first device, or may be a second device associated with the first device, and the first image may be at least part of an image currently displayed on a display screen of the target device. Since the content of this screen comes from a wide range of sources, it may be the interface of an application program on the target device, or it may be the superposition of the interfaces of multiple application programs on the target device.
- the screen may be a frame of a video being played, or a photo-album list containing multiple photos; therefore, the first image is not limited by any particular application program or by the first device itself, and the first features that the first image may include are extremely flexible and diverse. The flexibility and diversity of acquiring image features are thus greatly improved, and user needs can be fully met.
- the first feature may be a color-type feature or a texture-type feature.
- the target device is the first device, and the first device acquires the first feature in response to the first acquisition instruction, including:
- the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation;
- the first device extracts the first feature from the first image.
- the first image is at least part of the image currently displayed on the display screen of the first device, and this screen is not restricted by any particular application program; correspondingly, the first image is not restricted by any particular application program in the first device.
- freed from the limitation of a single application program, the first feature can be obtained from sources other than the preset image feature library, such as areas outside the interface of the graphic editing application program, thus improving the flexibility and variety of image feature acquisition and fully meeting user needs.
- the operation is simpler.
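The two steps above (capture a region, then extract a feature from it) can be sketched as follows. The patent leaves the extraction algorithm open; as one illustrative choice (function names are invented, not from the patent), a color-type first feature could simply be the most frequent pixel value in the captured region:

```python
from collections import Counter

def extract_dominant_color(pixels):
    """Return the most frequent (R, G, B) value in a captured region.

    `pixels` is a flat list of (R, G, B) tuples, e.g. sampled from the
    first image. This is only one possible color-type feature; the
    patent does not prescribe a specific extraction method.
    """
    if not pixels:
        raise ValueError("empty region")
    return Counter(pixels).most_common(1)[0][0]

# A small region where red dominates.
region = [(255, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255)]
print(extract_dominant_color(region))  # (255, 0, 0)
```

Other extraction strategies (average color, palette clustering, texture descriptors) would slot in at the same point.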
- the method further includes:
- the first device creates a first window based on the first acquisition instruction, the size of the first window being the same as the size of the display screen of the first device, and the first window being a transparent window located on top of the other windows displayed on the display screen;
- the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, including:
- when the first device receives the first screenshot operation based on the first window, it acquires the first image from the screen currently displayed on the display screen of the first device;
- the method further includes:
- the first device closes the first window.
- each window may belong to a different application program.
- the left side of the display screen may be a drawing-program window, and the right side may be a photo-album window. Therefore, to prevent the first device from confusing the operation of obtaining the first image with other operations (such as operations on the photo album) and to improve the reliability of obtaining the first image, the first device may create the first window.
- the above-mentioned windows, such as the second window and the third window, together constitute the screen currently displayed on the display screen.
- the transparency of the first window may be preset in advance by relevant technical personnel, or may be set by the user before the first window is created.
- the transparency of the first window may be 100%.
- the transparency of the first window may also be other values, and this embodiment of the present application does not specifically limit the transparency of the first window.
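The create/use/close lifecycle of the first window can be modeled with a toy window manager (illustrative only; class and method names are invented, and a real implementation would go through the platform's window management service):

```python
class WindowManager:
    """Toy window manager: windows are kept in z-order (last = topmost)."""

    def __init__(self, screen_size):
        self.screen_size = screen_size
        self.windows = []  # bottom .. top

    def create_capture_window(self, transparency=1.0):
        """Create the 'first window': full-screen, transparent, and topmost,
        so screenshot gestures are not confused with clicks on the
        windows underneath it."""
        win = {"size": self.screen_size,
               "transparency": transparency,
               "kind": "capture"}
        self.windows.append(win)  # placed on top of all other windows
        return win

    def close(self, win):
        self.windows.remove(win)

wm = WindowManager(screen_size=(2560, 1600))
overlay = wm.create_capture_window(transparency=1.0)  # fully transparent
assert wm.windows[-1] is overlay      # it sits on top of all other windows
# ... receive the first screenshot operation, acquire the first image ...
wm.close(overlay)                     # then close the first window
```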
- the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, including:
- the first device determines a first enclosed area on the display screen of the first device based on the first screenshot operation
- the first device acquires the first image based on the first enclosed area.
- the first device determines the first enclosed area on the display screen of the first device based on the first screenshot operation, including:
- the first device determines a first position based on the first screenshot operation, and determines the area within a first frame at the first position as the first closed area, the first frame being a preset frame; or,
- the first screenshot operation is a sliding operation, and the first device determines a closed area formed by a sliding track of the sliding operation as the first closed area.
- the user can flexibly and accurately obtain the first image of any size and shape by sliding on the display screen.
- the first enclosed area may be the largest enclosed area or the smallest enclosed area formed by the sliding track.
- the two ends of the sliding track may be connected to obtain a closed area.
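One way to realize the "closed area formed by a sliding track" described above is a point-in-polygon test. Ray casting is one illustrative choice (not mandated by the patent); connecting each track point to the next, wrapping around at the end, implements the "connect the two ends" step:

```python
def point_in_closed_track(point, track):
    """Ray-casting test: is `point` inside the polygon formed by the
    sliding track? The modulo index wraps the last point back to the
    first, which closes the track."""
    x, y = point
    inside = False
    n = len(track)
    for i in range(n):
        x1, y1 = track[i]
        x2, y2 = track[(i + 1) % n]  # wraps around: closes the track
        if (y1 > y) != (y2 > y):
            # The horizontal ray from `point` crosses this edge.
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# A roughly square sliding track (the open ends get connected implicitly).
track = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(point_in_closed_track((50, 50), track))   # True
print(point_in_closed_track((150, 50), track))  # False
```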
- the first frame (including its size and shape) can be determined by a preset setting.
- the first device may provide a plurality of different frames to the user in advance, and when a user's selection operation is received based on any frame, the frame is determined as the first frame.
- the embodiment of the present application does not specifically limit the size, shape, and setting manner of the first frame.
- the acquiring the first image by the first device based on the first enclosed area includes:
- the first device captures the first image within the first closed area from the screen currently displayed on the display screen of the first device; or,
- the first device captures the screen currently displayed on the display screen of the first device as a second image, and crops the second image based on the first closed area to obtain the first image.
- the first image, determined from the first closed area, contains less image data than the second image, so the subsequent extraction of the first feature requires less data and is more accurate, improving both the efficiency and the accuracy of acquiring the first feature.
- the second image may also be acquired for subsequent acquisition of the first feature.
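Both variants above reduce to cropping. As an illustrative sketch (helper names invented), the first image can be cut out of a captured frame using the bounding box of the first closed area:

```python
def bounding_box(closed_area):
    """Axis-aligned bounding box of the first closed area's points."""
    xs = [p[0] for p in closed_area]
    ys = [p[1] for p in closed_area]
    return min(xs), min(ys), max(xs), max(ys)

def crop(frame, box):
    """Crop a frame (a list of pixel rows) to box (x0, y0, x1, y1):
    inclusive of x0/y0, exclusive of x1/y1."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

# Second image: a 4x4 frame; the first image is its inner 2x2 region.
frame = [[(x, y) for x in range(4)] for y in range(4)]
area = [(1, 1), (3, 1), (3, 3), (1, 3)]
first_image = crop(frame, bounding_box(area))
print(len(first_image), len(first_image[0]))  # 2 2
```

Pixels outside an irregular closed area but inside its bounding box could additionally be masked out with a point-in-polygon test before feature extraction.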
- the target device is the second device
- the first device acquires the first feature in response to the first acquisition instruction, including:
- the first device sends a first acquisition request to the second device, the first acquisition request corresponds to the first acquisition instruction, and the first acquisition request is used to request to acquire image features from the second device;
- the first device receives the first characteristic fed back by the second device.
- the first image is at least a part of the image currently displayed on the display screen of the second device, and that screen is not restricted by any application program; correspondingly, the first image is not restricted by the first device itself, so the first device can obtain the first feature from a second device other than itself. This further improves the flexibility and diversity of acquired image features and can fully meet user needs.
- a user can apply the color or texture of a photo in the camera roll of the mobile phone to the drawing program of the tablet computer.
- the first device may communicate with the second device through the distributed data interaction channel.
- the first acquisition request may also carry the target feature type.
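A minimal model of this cross-device request/response exchange, assuming invented message shapes (the patent does not define a wire format) and reusing a most-frequent-color extraction on the second device:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class AcquisitionRequest:
    """First acquisition request, first device -> second device.
    `feature_type` is the optional target feature type the request
    may carry ('color' or 'texture')."""
    request_id: int
    feature_type: Optional[str] = None

@dataclass
class AcquisitionResponse:
    """First feature fed back by the second device."""
    request_id: int
    feature: object

def handle_request(req, first_image_pixels):
    """Second device side: grab its own first image (here passed in as
    pixels) and extract the requested feature type. Only 'color' is
    sketched; otherwise the image itself is fed back."""
    if req.feature_type == "color":
        feature = Counter(first_image_pixels).most_common(1)[0][0]
    else:
        feature = first_image_pixels
    return AcquisitionResponse(req.request_id, feature)

resp = handle_request(AcquisitionRequest(1, "color"),
                      [(9, 9, 9), (9, 9, 9), (0, 0, 0)])
print(resp.feature)  # (9, 9, 9)
```

In the second-acquisition-request variant described below, the second device would feed back the first image instead, and the first device would run the extraction locally.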
- the target device is the second device
- the first device acquires the first feature in response to the first acquisition instruction, including:
- the first device sends a second acquisition request to the second device, the second acquisition request corresponds to the first acquisition instruction, and the second acquisition request is used to request to acquire an image from the second device;
- the first device receives the first image fed back by the second device
- the first device extracts the first feature from the first image.
- before the first device acquires the first feature in response to the first acquisition instruction, the method further includes:
- the first device receives a first setting instruction, where the first setting instruction is used to indicate that the target device is the first device; or,
- the first device receives a second setting instruction, where the second setting instruction is used to indicate that the target device is the second device.
- the first device may also determine that the target device is the first device or the second device in other ways, or the first device may be configured to obtain image features only from the first device or only from the second device, so that there is no need to determine whether the target device is the first device or the second device.
- before the first device acquires the first feature in response to the first acquisition instruction, the method further includes:
- the first device receives a third setting instruction, and the third setting instruction is used to indicate a target feature type for acquiring image features;
- the first device acquires the first feature, including:
- the first device acquires the first feature based on the target feature type.
- the target feature type includes color type or texture type.
- the first device can accurately extract the first feature of the target feature type from the first image.
- the first device may process the first image based on at least one feature type, so as to obtain at least one type of first feature.
- the method also includes:
- the first device receives a third acquisition request from an associated third device, and the third acquisition request is used to request to acquire image features from the first device;
- the first device acquires a third image, and the third image is at least part of an image currently displayed on a display screen of the first device;
- said first device extracts a second feature from said third image
- the first device feeds back the second feature to the third device.
- the first device may also act as a provider of the image feature, thereby providing the second feature to the associated third device. And in some embodiments, the first device may send the third image to the third device, and correspondingly, the third device processes the third image to obtain the second feature.
- the first device performs an image-text editing operation based on the first feature.
- the first device may perform graphic editing operations based on the first feature in the graphic editing program, so as to apply the first feature to new text or images, so that the operated text or image has the first feature.
- the first device may add the obtained first feature to an image feature library such as the built-in palette or the built-in texture image library, so that the user can directly obtain the first feature from these image feature libraries next time.
- the first device or the second device may not obtain the first feature from the first image, but directly copy the first image to a graphic editing program.
- the embodiment of the present application provides a method for acquiring image features, including:
- the second device receives a first acquisition request sent by the first device, where the first acquisition request is used to request to acquire image features from the second device;
- the second device acquires a first image, and the first image is at least part of an image currently displayed on a display screen of the second device;
- said second device extracts said first feature from said first image
- the second device feeds back the first feature to the first device.
- the acquiring the first image by the second device includes:
- the second device acquires the first image from the screen currently displayed on the display screen of the second device.
- the method further includes:
- the second device creates a first window based on the first acquisition request, the size of the first window being the same as the size of the display screen of the second device, and the first window being a transparent window located on top of the other windows displayed on the display screen;
- the second device acquires the first image from the screen currently displayed on the display screen of the second device based on the first screenshot operation, including:
- when the second device receives the first screenshot operation based on the first window, it acquires the first image from the screen currently displayed on the display screen of the second device;
- the method further includes:
- the second device closes the first window.
- the second device acquires the first image from the screen currently displayed on the display screen of the second device based on the first screenshot operation, including:
- the second device determines a first enclosed area on the display screen of the second device based on the first screenshot operation
- the second device acquires the first image based on the first enclosed area.
- the second device determines the first enclosed area on the display screen of the second device based on the first screenshot operation, including:
- the second device determines a first position based on the first screenshot operation, and determines the area within a first frame at the first position as the first closed area, the first frame being a preset frame; or,
- the first screenshot operation is a sliding operation
- the second device determines a closed area formed by a sliding track of the sliding operation as the first closed area.
- the acquiring the first image by the second device based on the first enclosed area includes:
- the second device captures the first image within the first enclosed area from the screen currently displayed on the display screen of the second device; or,
- the second device captures the screen currently displayed on the display screen of the second device as a second image, and crops the second image based on the first closed area to obtain the first image.
- the second device may also acquire and feed back the first image to the first device when receiving the second acquisition request sent by the first device, and the first device extracts the first feature from the first image.
- the embodiment of the present application provides an apparatus for acquiring image features; the apparatus can be set in an electronic device and can be used to perform the method described in any one of the first aspect and/or any one of the second aspect.
- the device may include a hand-painted brush engine module.
- the hand-painted brush engine module can be used for interaction between the electronic device and the user, such as triggering the electronic device to acquire image features according to the method provided by the embodiment of the present application.
- the device may include a window management service module.
- the window management service module can be used to manage the life cycle of each window in the electronic device, detect touch events for each window, and so on.
- the electronic device can create and close the first window through the window management service module.
- the device may include a layer composition module.
- the layer compositing module can be used to synthesize the pictures obtained from multiple windows into one image, and thus can be used to obtain the first image or the second image.
- the device may include a distributed task scheduling module.
- the distributed task scheduling module can be used for electronic devices to call services from other devices through distributed data interaction channels.
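How these modules might fit together can be sketched as follows (a toy composition with invented interfaces; the module names follow the description above):

```python
from collections import Counter

class WindowService:
    """Stand-in for the window management service module: manages the
    life cycle of the first window."""
    def __init__(self):
        self.open = []
    def create(self):
        win = object()
        self.open.append(win)
        return win
    def close(self, win):
        self.open.remove(win)

class LayerCompositor:
    """Stand-in for the layer composition module: synthesizes the
    pictures of multiple windows into one image (trivial pass-through
    here)."""
    def compose(self, rows):
        return rows

class BrushEngine:
    """Stand-in for the hand-painted brush engine module: the entry
    point triggered by the user's acquisition instruction."""
    def __init__(self, windows, compositor):
        self.windows, self.compositor = windows, compositor

    def acquire_color(self, region_rows):
        overlay = self.windows.create()   # create the first window
        try:
            image = self.compositor.compose(region_rows)
            pixels = [px for row in image for px in row]
            return Counter(pixels).most_common(1)[0][0]
        finally:
            self.windows.close(overlay)   # always close the first window

engine = BrushEngine(WindowService(), LayerCompositor())
color = engine.acquire_color([[(7, 7, 7), (7, 7, 7)],
                              [(0, 0, 0), (7, 7, 7)]])
print(color)  # (7, 7, 7)
```

A distributed task scheduling module would sit in front of `BrushEngine` when the target device is a second device, forwarding the request over the distributed data interaction channel.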
- an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is used to store a computer program, and the processor is used to, when calling the computer program, execute the method described in any one of the above first aspect and/or any one of the second aspect.
- an embodiment of the present application provides a chip system; the chip system includes a processor coupled to a memory, and the processor executes a computer program stored in the memory to implement the method described in any one of the above first aspect and/or any one of the second aspect.
- the chip system may be a single chip, or a chip module composed of multiple chips.
- the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any one of the above first aspect and/or any one of the second aspect is implemented.
- the embodiment of the present application provides a computer program product which, when run on an electronic device, enables the electronic device to execute the method described in any one of the above first aspect and/or any one of the second aspect.
- FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- FIG. 2 is a software structural block diagram of an electronic device provided in an embodiment of the present application.
- FIG. 3 is a flow chart of a method for editing images and texts provided in the embodiment of the present application.
- FIG. 4 is a block diagram of a system for acquiring image features provided by an embodiment of the present application.
- FIG. 5 is a flow chart of a method for acquiring image features provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a display interface of an electronic device provided in an embodiment of the present application.
- FIG. 7 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a display scene provided by an embodiment of the present application.
- Fig. 10 is a schematic diagram of a first enclosed area provided by the embodiment of the present application.
- Fig. 11 is a schematic diagram of another first enclosed area provided by the embodiment of the present application.
- FIG. 12 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- the method for acquiring image features provided by the embodiment of the present application can be applied to mobile phones, tablet computers, wearable devices, vehicle-mounted devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, personal digital assistants (PDAs), and other electronic devices; the embodiments of the present application do not impose any restrictions on the specific type of electronic device.
- FIG. 1 is a schematic structural diagram of an example of an electronic device 100 provided by an embodiment of the present application.
- the electronic device 100 may include a processor 110, a memory 120, a communication module 130, a display screen 140, and the like.
- the processor 110 may include one or more processing units, and the memory 120 is used for storing program codes and data.
- the processor 110 may execute computer-executed instructions stored in the memory 120 for controlling and managing the actions of the electronic device 100 .
- the communication module 130 may be used for communication between various internal modules of the electronic device 100, or for communication between the electronic device 100 and other external electronic devices, and the like. Exemplarily, if the electronic device 100 communicates with other electronic devices through a wired connection, the communication module 130 may include an interface, such as a USB interface or a USB Type-C interface.
- the USB interface can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
- the communication module 130 may include an audio device, a radio frequency circuit, a Bluetooth chip, a wireless fidelity (Wi-Fi) chip, a near-field communication (near-field communication, NFC) module, etc.
- the display screen 140 may display images or videos in the human-computer interaction interface.
- the electronic device 100 may further include a pressure sensor 150 for sensing a pressure signal, and may convert the pressure signal into an electrical signal.
- the pressure sensor 150 can be disposed on the display screen.
- there are many types of pressure sensors 150, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
- a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of pressure according to the change in capacitance. When a touch operation acts on the display screen, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 150 .
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 150 .
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
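The pressure-based dispatch described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold value, icon names, and instruction names are assumptions made for the example.

```python
# Same touch position, different instruction depending on touch intensity.
# FIRST_PRESSURE_THRESHOLD and the instruction names are hypothetical.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure, illustrative value

def dispatch_touch(icon: str, pressure: float) -> str:
    """Return the instruction for a touch on an app icon at a given pressure."""
    if icon == "short_message":
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view_short_messages"       # light touch: view messages
        return "create_new_short_message"      # firm touch: create a message
    return "open_app"                          # default for other icons

print(dispatch_touch("short_message", 0.2))  # → view_short_messages
print(dispatch_touch("short_message", 0.8))  # → create_new_short_message
```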
- the electronic device 100 may also include a peripheral device 160, such as a mouse, a keyboard, a speaker, a microphone, a stylus, and the like.
- the embodiment of the present application does not specifically limit the structure of the electronic device 100 .
- the electronic device 100 may also include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
- the illustrated components can be realized in hardware, software or a combination of software and hardware.
- the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. Please refer to FIG. 2 .
- FIG. 2 is a software structural block diagram of the electronic device 100 according to the embodiment of the present application.
- the electronic device 100 may include an application layer 210 and a system layer 220 .
- Application layer 210 may include a series of application packages.
- the application package may include graphic editing applications such as document editing applications and drawing applications.
- Graphic editing applications can be used to edit text or images, such as generating text, modifying text styles, or drawing images.
- the application layer 210 may include a built-in palette 211 and a built-in texture image library 212 .
- the built-in color palette 211 may include multiple preset color features.
- the built-in texture image library 212 may include a plurality of texture features that are preset or uploaded by the user in advance.
- the application layer 210 may include a hand-painted brush engine module 213 .
- the hand-painted brush engine module 213 may be used for interaction between the electronic device 100 and the user, such as triggering the electronic device 100 to acquire image features according to the method provided by the embodiment of the present application.
- the system layer 220 may include a window management service module 221 and a layer composition module 222 .
- the window management service module 221 can be used to manage the life cycle of each window in the electronic device 100 and detect touch events on each window and so on.
- the touch event may include touch coordinates, pressure values, and the like.
- the layer compositing module 222 can be used for compositing the frames obtained from multiple windows into one image.
- the system layer 220 may also include a distributed task scheduling module 223 .
- the distributed task scheduling module 223 can be used for the electronic device 100 to call services from other devices through the distributed data interaction channel.
- system layer 220 may also include an image rendering module.
- the image drawing module can be used to draw images on the display screen 140 .
- the graphic editing function is an important function of electronic equipment. Users can edit text or images on electronic devices through the graphic editing function. In the process of graphic text editing, users usually need to personalize the text or image, such as setting the text or image to a specific color, or drawing a specific texture on a certain area of the text or image.
- FIG. 3 is a flow chart of a method for editing images and texts provided by the embodiment of the present application.
- the user touches the display screen 140 of the electronic device 100 .
- the user can interact with the electronic device 100 by touching the display screen 140 of the electronic device 100 with a body or a stylus, such as selecting text or image areas that need to be colored or textured.
- the electronic device 100 processes a touch event through the system layer 220 to obtain a touch event object.
- the electronic device 100 can process the touch event through the system layer 220 , encapsulate the coordinates and pressure values of the touch event into a touch event object, and provide the touch event object to the application layer 210 .
- the electronic device 100 performs corresponding logic processing based on the touch event object through the application layer 210.
- the graphic and text editing program (such as a drawing program or a document program) of the application layer 210 in the electronic device 100 can perform logic processing inside the application program after obtaining the touch event object, such as determining that the user has opened the built-in palette 211, determining the color selected by the user in the built-in palette 211, determining that the user has opened the built-in texture image library 212, or determining the texture feature selected by the user in the built-in texture image library 212.
- the electronic device 100 performs graphic and text editing operations through the system layer 220 .
- the electronic device 100 can perform image-text editing operations through the system layer 220 , and display image-text editing results on the display screen 140 .
- as for the editing operation on an image: if the image is to be colored, the image can be dyed based on the color feature determined from the built-in palette 211; if a texture is to be drawn on the image, the corresponding texture can be drawn based on the texture feature determined from the built-in texture image library 212.
- the electronic device can only provide the user with color features through the built-in palette and texture features through the built-in texture image library in the process of implementing the above-mentioned image-text editing method.
- since image feature libraries such as the built-in color palette and the built-in texture image library are usually set in advance by developers of graphic editing programs, the image features included in them are quite limited and can hardly meet user needs.
- embodiments of the present application provide a system and method for acquiring image features.
- FIG. 4 is a block diagram of a system for acquiring image features provided by an embodiment of the present application.
- the system may include a first device 410 , and may further include a second device 420 associated with the first device 410 and a distributed data interaction channel 430 for data interaction between the first device 410 and the second device 420 .
- the association between the first device 410 and the second device 420 may refer to that the first device 410 and the second device 420 are or can be connected through communication.
- the first device 410 and the second device 420 may be devices currently connected through short-range communication technology.
- the first device 410 and the second device 420 may be devices corresponding to the same user identifier.
- the first device 410 may be a tablet computer of user A
- the second device 420 may be a mobile phone of user A.
- the first device 410 may include an application layer 411 and a system layer 412 .
- the application layer 411 may include a freehand brush engine module 413 .
- the system layer 412 may include a window management service module 414 , a layer synthesis module 415 and a distributed task scheduling module 416 .
- the second device 420 may include an application layer 421 and a system layer 422 .
- the application layer 421 may include a freehand brush engine module 423 .
- the system layer 422 may include a window management service module 424, a layer compositing module 425 and a distributed task scheduling module 426.
- the above-mentioned hand-painted brush engine module 413, window management service module 414, layer composition module 415, and distributed task scheduling module 416 may be respectively similar or identical to the hand-painted brush engine module 213, window management service module 221, layer composition module 222, and distributed task scheduling module 223 in the electronic device 100 in Fig. 2; likewise, the hand-painted brush engine module 423, window management service module 424, layer composition module 425, and distributed task scheduling module 426 may be similar or identical to the hand-painted brush engine module 213, window management service module 221, layer composition module 222, and distributed task scheduling module 223 in the electronic device 100 in Fig. 2.
- in some embodiments, the hand-painted brush engine module in the first device 410 and/or the second device 420 can be omitted; moreover, in the case where the first device 410 does not need to acquire image features from the second device 420, the distributed task scheduling module 416 in the first device 410 may also be omitted.
- the application layer of the first device 410 and/or the second device 420 may also include at least one of a built-in color palette and a built-in texture image library.
- the first device 410 can acquire the first feature, where the first feature is a feature of the first image of the target device. The target device may be the first device 410 or the second device 420, and the first image may be at least a part of the image currently displayed on the target device. Since the content of the screen displayed on the target device comes from a wide range of sources, it may be the interface of an application program on the target device, the superimposition of the interfaces of multiple application programs on the target device, a frame in a video, or a list of multiple photos in an album.
- the first image, as a part of the screen, is not limited to a single application program in the first device 410 or to the first device 410 itself, and is not tied to the built-in palette or built-in texture library of the graphic editing program.
- the first features that may be included in the first image are extremely flexible and diverse, thus greatly improving the flexibility and diversity of acquiring image features.
- for example, the user may open a favorite photo on the display screen of the first device 410 so that the picture currently displayed on the display screen includes the photo, and then obtain the first image and extract the first feature from it; that is, image features can be quickly obtained from the user's favorite images, which can fully meet the user's needs.
- FIG. 5 is a flowchart of a method for acquiring image features provided by an embodiment of the present application. It should be noted that the method is not limited to the specific order shown in FIG. 5 and described below; in other embodiments, the order of some steps in the method can be exchanged according to actual needs, and some steps can also be omitted or deleted. This method can be used by the first device alone, or in the interaction between the first device and the second device as shown in Figure 4, and includes the following steps:
- the first device receives a first acquisition instruction.
- the first obtaining instruction is used to instruct the first device to obtain image features.
- the image feature may be a visual feature of the image, and the image feature may be used to edit the text or the image.
- the text or image can be made to have the image feature.
- the feature types of image features may include color types and texture types.
- the feature types of image features may also include other feature types, such as at least one of shape type and spatial relationship type.
- the first device may provide the user with a control for triggering the acquisition of image features through a human-computer interaction interface, and receive a first acquisition instruction submitted by the user based on the control.
- the display screen of the first device is a touch screen, and the user can interact with the first device by clicking or sliding on the screen with a finger or a stylus.
- the lower left corner of the first device includes an "acquire image features" button, and when the first device receives a click operation based on the button, it can be determined that the first acquisition instruction is received.
- the first device may receive a third setting instruction submitted by the user, and the third setting instruction is used to indicate the target feature type.
- the target feature type may include a color type or a texture type. In some other embodiments, the target feature type may also be carried in the first acquisition instruction.
- when the first device receives a click operation based on "acquire image features" in Figure 6, it can continue to provide the user with a secondary menu for determining the feature type. As shown in Figure 7, the secondary menu includes a variety of selectable feature types; when the first device receives the user's click operation on any feature type, it determines that that feature type is the target feature type selected by the user.
- in order to enable the user to obtain image features from other electronic devices and apply them on the first device, further improving the range and flexibility of image feature acquisition, the first device can receive a first setting instruction or a second setting instruction submitted by the user. The first setting instruction may carry the device identifier of the first device, indicating that the target device for acquiring image features is the first device; the second setting instruction may carry the device identifier of the second device, indicating that the target device for acquiring image features is the second device.
- the device identifier of the first device or the device identifier of the second device may also be carried in the first acquisition instruction.
- when the first device receives the user's click operation based on "acquire image features" as shown in Figure 6, or determines the target feature type selected by the user based on the secondary menu as shown in Figure 7, it may continue to display the device selection interface shown in Figure 8. The device selection interface includes at least one device identifier; when a user's click operation is received on any device identifier, it can be determined that the electronic device corresponding to that device identifier is the target device.
- when the first device receives a click operation based on the device identifier of the first device, it can be determined that the first setting instruction is received and the first device is the target device; when the first device receives a click operation based on the device identifier of the second device, it can be determined that the second setting instruction is received and the second device is the target device.
- the second device may be a device associated with the first device.
- it should be noted that the first device may receive other, more detailed indication information for instructing the manner of acquiring image features; this indication information may be indicated by separate setting instructions, or all of it may be carried in the first acquisition instruction.
- the embodiment of the present application does not specifically limit the manner in which the first device receives the indication information used to indicate the manner of acquiring the image feature.
- the first device judges whether to acquire image features across devices. If yes, execute S506, otherwise execute S503.
- in order to determine whether to obtain image features from the local end or from another device, and to adopt the corresponding acquisition method, the first device can determine whether image features need to be obtained across devices.
- for example, if the first device does not receive a device identifier submitted by the user, or the received device identifier is the device identifier of the first device itself, the first device may determine that image features do not need to be acquired across devices; if the device identifier submitted by the user is not the device identifier of the first device, the first device determines that image features need to be extracted across devices from the second device corresponding to the received device identifier.
- the first device may determine whether the device identifier is carried in the first setting instruction, the second setting instruction or the first obtaining instruction. If the first setting instruction or the first acquisition instruction does not carry any device identifier or the carried device identifier is the device identifier of the first device, it may be determined that there is no need to extract image features across devices. If the second setting instruction or the first acquisition instruction carries a device identifier, and the device identifier is not the device identifier of the first device, it is necessary to extract image features across devices from the second device corresponding to the received device identifier.
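The device-identifier decision in S502 can be sketched as follows. This is an illustrative model, not the patent's implementation; the identifier values are assumptions.

```python
# Decide whether image features must be extracted across devices (S502):
# no identifier, or the first device's own identifier -> extract locally;
# any other identifier -> extract from the corresponding second device.
from typing import Optional

LOCAL_DEVICE_ID = "device-410"  # hypothetical identifier of the first device

def needs_cross_device(carried_device_id: Optional[str]) -> bool:
    """True if the first feature must be fetched from another device."""
    if carried_device_id is None:
        return False  # no identifier carried: local extraction
    return carried_device_id != LOCAL_DEVICE_ID

print(needs_cross_device(None))          # → False (execute S503)
print(needs_cross_device("device-410"))  # → False (execute S503)
print(needs_cross_device("device-420"))  # → True  (execute S506)
```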
- the first device may also be configured to obtain image features only from its own end or only from the second device, so S502 may not be executed, that is, S502 is an optional step.
- the first device creates a first window.
- when multiple windows are displayed on the display screen, each window may belong to a different application program; for example, the left side may be the window of the drawing program, and the right side may be the window of the photo album. Therefore, in order to prevent the first device from confusing the operation of obtaining the first image with other operations (such as operations intended for the photo album), and to improve the reliability of obtaining the first image, the first device can create a first window through the aforementioned window management service module.
- the size of the first window may be the same as the size of the display screen of the first device, and the first window is a transparent window located above the other windows displayed on the display screen; that is, the first window is a global transparent window located above all application programs of the first device.
- the transparency of the first window may be set in advance by relevant technical personnel, or may be obtained from the user's submission before the first window is created.
- the transparency of the first window may be 100%.
- the transparency of the first window may also be other values, and this embodiment of the present application does not specifically limit the transparency of the first window.
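The first window of S503 can be modelled as follows. This is a minimal sketch under stated assumptions: the `Window` fields, z-order convention, and transparency scale (1.0 = 100% transparent) are illustrative, not the patent's data structures.

```python
# Model of the global transparent first window: full-screen, fully
# transparent, and stacked above every existing application window.
from dataclasses import dataclass
from typing import List

@dataclass
class Window:
    name: str
    width: int
    height: int
    z_order: int         # higher value = closer to the top layer
    transparency: float  # 1.0 corresponds to 100% transparency

def create_first_window(screen_w: int, screen_h: int,
                        windows: List[Window]) -> Window:
    """Create a screen-sized transparent window above all existing windows."""
    top = max((w.z_order for w in windows), default=0)
    return Window("first_window", screen_w, screen_h, top + 1, 1.0)

# As in the scene of FIG. 9: a drawing-program window and a photo-album window.
existing = [Window("drawing_app", 640, 800, 1, 0.0),
            Window("photo_album", 640, 800, 2, 0.0)]
first = create_first_window(1280, 800, existing)
print(first.z_order, first.transparency)  # → 3 1.0
```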
- a schematic diagram of a display scene may be as shown in FIG. 9 .
- the scene includes a first window 901 on the top layer, the first window is a global transparent window with a transparency of 100%, and the lower layer of the first window is the original display interface of the first device, including the second window 902 and the third window 903, wherein the second window 902 is the window of the drawing program as shown in FIGS. 6-8 , and the third window is the window of the photo album as shown in FIGS. 6-8 .
- the first device may also acquire the first image in other ways, therefore S503 may not be executed, that is, S503 is an optional step.
- the first device acquires a first image.
- the first image may be at least a part of the image currently displayed on the display screen of the first device.
- the first device may acquire the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation. In some embodiments, when the first device creates the first window, the first device may receive the first screenshot operation based on the first window.
- the user can set the area range of the first image to be acquired through the first screenshot operation.
- the first device may determine the first closed area on the display screen of the first device based on the first screenshot operation, and the image in the first closed area is the first image that the user needs to acquire.
- the first screenshot operation may be used to directly determine the first closed area.
- the first screenshot operation may include a sliding operation.
- the first device may determine the closed area formed by the sliding track of the sliding operation as the first closed area.
- the enclosed area may be the largest enclosed area or the smallest enclosed area formed by the sliding trajectory. That is, the user can flexibly and accurately acquire the first image of any size and any shape by sliding on the display screen.
- for example, the photo on the upper right of the display screen of the first device includes river banks on both sides and a person jumping above them, and the user draws an irregular first closed area 1001 that includes the river bank on the right side.
- the ends of the slide tracks may be connected to obtain a closed area.
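The closed-area formation described above can be sketched as follows: the two ends of the sliding track are connected to close the polygon, and a standard ray-casting test decides which points fall inside it. The coordinates are illustrative; this is not the patent's implementation.

```python
# Close a sliding track into the first closed area (S504) and test membership.
from typing import List, Tuple

Point = Tuple[float, float]

def close_track(track: List[Point]) -> List[Point]:
    """Connect the track's end point back to its start point."""
    return track if track[0] == track[-1] else track + [track[0]]

def point_in_polygon(x: float, y: float, poly: List[Point]) -> bool:
    """Ray-casting point-in-polygon test."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

track = [(0, 0), (10, 0), (10, 10), (0, 10)]  # open sliding track
area = close_track(track)                      # ends connected
print(point_in_polygon(5, 5, area))   # → True  (inside the first closed area)
print(point_in_polygon(15, 5, area))  # → False (outside)
```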
- the first screenshot operation may be used to determine the first position of the first closed area on the display screen, and the preset first border may be used to determine the size and shape of the first closed area.
- the first device may determine the first position based on the first screenshot operation, and determine the area within the first border at the first position as the first closed area. Since the user does not need to draw the first closed area, the difficulty of acquiring the first image can be reduced.
- the first frame is a circular frame with a diameter of 3 cm.
- the photo on the lower right of the display screen of the first device includes a half-body photo of a person.
- the clicked position or the end position of the sliding track is the first position.
- a circular frame with a diameter of 3 cm is generated at the first position, and the area within the circular frame is the first closed area 1001, which includes the person's head portrait.
- the first frame (including its size and shape) may be set in advance.
- the first device may provide a plurality of different frames to the user in advance, and when a user's selection operation is received based on any frame, the frame is determined as the first frame.
- the embodiment of the present application does not specifically limit the size, shape, and setting manner of the first frame.
- the first screenshot operation may also include other operations, as long as the first closed area can be determined, and the embodiment of the present application does not specifically limit the operation mode of the first screenshot operation .
- the first device may acquire the first image based on the first closed area.
- the first device may capture the screen currently displayed on the display screen of the first device as the second image, and crop the second image based on the first closed area to obtain the first image. That is, the first device may first take a screenshot of the entire screen of the display screen of the first device, and then cut out the first image from the second image obtained by the screenshot according to the first enclosed area.
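The "screenshot then crop" path just described can be sketched as follows: capture the whole screen as the second image, then cut the first image out along the bounding box of the first closed area. The screen is modelled here as a nested list of pixel values; this is an illustration, not the patent's implementation.

```python
# Crop the first image out of the full-screen second image (S504).
from typing import List, Tuple

Point = Tuple[int, int]

def bounding_box(closed_area: List[Point]) -> Tuple[int, int, int, int]:
    """Axis-aligned bounding box (x0, y0, x1, y1) of the first closed area."""
    xs = [p[0] for p in closed_area]
    ys = [p[1] for p in closed_area]
    return min(xs), min(ys), max(xs), max(ys)

def crop(second_image: List[List[int]],
         closed_area: List[Point]) -> List[List[int]]:
    """Cut the first image out of the screenshot of the whole screen."""
    x0, y0, x1, y1 = bounding_box(closed_area)
    return [row[x0:x1 + 1] for row in second_image[y0:y1 + 1]]

# A 4x4 "screen"; the first closed area covers the lower-right 2x2 block.
second_image = [[r * 4 + c for c in range(4)] for r in range(4)]
first_image = crop(second_image, [(2, 2), (3, 2), (3, 3), (2, 3)])
print(first_image)  # → [[10, 11], [14, 15]]
```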
- the first device may capture the first image in the first enclosed area from the screen currently displayed on the display screen of the first device.
- the first device can, through the layer composition module, determine the picture of at least one window that intersects the first closed area based on the positional relationship between each window and the first closed area, and, according to the upper-lower layer relationship between the at least one window, synthesize the pictures of the at least one window into the first image.
- since the first image determined according to the first closed area may include less image data than the second image, the subsequent analysis for extracting the first feature involves less data, which can improve the efficiency and accuracy of acquiring the first feature.
- the second image may also be acquired for subsequent acquisition of the first feature.
- the first device may close the first window after acquiring the first image, so that the user can continue to interact with other windows .
- the first device acquires the first feature. Afterwards, the first device may execute S511.
- the first device may analyze and process the first image so as to extract the first feature. Because the first image is at least part of the image currently displayed on the display screen of the first device, and the displayed screen is not restricted to a certain application program, the first image is correspondingly not limited to any application program in the first device. This makes it possible to obtain the first feature from sources other than a preset image feature library, such as areas outside the interface of the graphic editing application program, thus improving the flexibility and variety of image feature acquisition, fully meeting user needs, and making the operation simpler.
- if the first device acquires the target feature type specified by the user through the third setting instruction or the first acquisition instruction, the first device can process the first image based on the target feature type, thereby obtaining the first feature of that target feature type.
- for example, if the feature type carried in the first acquisition instruction is a color type, the first device may analyze the color of the first image, and the obtained first feature is a feature of the color type, such as a red-green-blue (RGB) value; if the feature type carried in the first acquisition instruction is a texture type, the first device can analyze the texture of the first image, and the obtained first feature is a feature of the texture type.
- the first image may be processed based on at least one feature type, so as to obtain at least one type of first feature.
- specifically, for the color type, the first device can analyze the first image by means of a color histogram, color set, color moments, color aggregation vector, or color correlogram; for the texture type, the first device can analyze the first image using statistical methods, geometric methods, model methods, or signal processing methods, or blur, denoise, or add salt-and-pepper noise to the first image; for shape features, the first device can analyze the first image by means of the boundary feature method, the Fourier shape description method, the geometric parameter method, or the shape invariant moment method; for the spatial relationship type, the first device can divide the first image into multiple image blocks, then extract the features of each image block and build an index.
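A color-type first feature (S505) can be sketched as a mean RGB value plus a coarse per-channel histogram, one of the color analysis means listed above. The pixel data is illustrative; a real implementation would read pixels from the first image.

```python
# Extract simple color-type features from a list of (R, G, B) pixels.
from typing import List, Tuple

Pixel = Tuple[int, int, int]

def mean_rgb(pixels: List[Pixel]) -> Tuple[int, int, int]:
    """Mean RGB value of the first image, e.g. usable as a brush color."""
    n = len(pixels)
    return tuple(round(sum(p[i] for p in pixels) / n) for i in range(3))

def channel_histogram(pixels: List[Pixel], channel: int,
                      bins: int = 4) -> List[int]:
    """Coarse histogram of one channel (values 0-255) with `bins` buckets."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p[channel] * bins // 256, bins - 1)] += 1
    return hist

# A predominantly red patch, as if cropped from the first image.
pixels = [(200, 40, 40), (220, 60, 20), (180, 20, 60), (200, 40, 40)]
print(mean_rgb(pixels))              # → (200, 40, 40)
print(channel_histogram(pixels, 0))  # → [0, 0, 1, 3]
```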
- the first device may also process the first image in other ways to obtain the first feature.
- the embodiment of the present application does not specifically limit the way to obtain the first feature from the first image.
- the first device sends a first acquisition request to the second device.
- the first device may send the first acquisition request to the second device through the distributed data interaction channel, thereby requesting the second device to acquire the image feature.
- the target feature type may be carried in the first acquisition request.
- the first device can establish a distributed data interaction channel with the second device, and perform data interaction with the second device through the distributed data interaction channel, including sending a first acquisition request to the second device and subsequently receiving the data fed back by the second device.
- when the second device receives the first acquisition request sent by the first device, it may display first notification information, where the first notification information is used to notify the user that the first acquisition request of the first device is about to be responded to.
- for example, when the second device receives the first acquisition request, it may display an interface as shown in FIG. , and the interface may include an accept button and a reject button. If the user's click operation on the accept button is received, the steps described below may proceed; if the user's click operation on the reject button is received, subsequent operations may be stopped.
- the first device may also send a second acquisition request to the second device, where the second acquisition request is used to request the second device to acquire an image for acquiring image features.
- the second device creates the first window.
- the manner in which the second device creates the first window may be the same as the manner in which the first device creates the first window in S503, which will not be repeated here.
- the second device acquires the first image.
- the manner in which the second device acquires the first image may be the same as the manner in which the first device acquires the first image in S504, which will not be repeated here.
- the second device acquires the first feature.
- the manner in which the second device acquires the first feature may be the same as the manner in which the first device acquires the first feature in S505, which will not be repeated here.
- the second device sends the first feature to the first device.
- the first device may execute S511.
- the second device may send the first characteristic to the first device based on the aforementioned distributed data interaction channel.
- S509 may not be executed, and the first image is fed back to the first device in S510.
- the first device may execute the aforementioned S505 when receiving the first image, so as to extract the first feature.
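The two cross-device variants above (the second device feeding back the feature in S509, or feeding back the image in S510 so the first device extracts the feature itself) can be sketched as follows. The message shapes and the in-process "channel" are illustrative assumptions; the description does not specify the wire format of the distributed data interaction channel.

```python
from dataclasses import dataclass

@dataclass
class AcquisitionRequest:
    kind: str                   # "feature" (first request) or "image" (second request)
    feature_type: str = "color" # optional target feature type

class SecondDevice:
    """Stands in for the device whose displayed screen is being sampled."""

    def current_frame(self):
        # Placeholder for capturing (part of) the currently displayed screen:
        # a 2 x 2 pure-red image as nested (R, G, B) tuples.
        return [[(255, 0, 0), (255, 0, 0)], [(255, 0, 0), (255, 0, 0)]]

    def handle(self, req: AcquisitionRequest):
        image = self.current_frame()
        if req.kind == "image":
            # S510 variant: feed the image back; the first device then
            # extracts the feature itself (S505).
            return image
        # S509 variant: extract the feature locally and feed back only the
        # feature, here the average color of the captured frame.
        n = sum(len(row) for row in image)
        return tuple(sum(px[c] for row in image for px in row) // n
                     for c in range(3))
```

With this sketch, `SecondDevice().handle(AcquisitionRequest("feature"))` returns the average RGB of the frame, while a `"image"` request returns the frame itself.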
- the first device can thus obtain the first feature from the second device. Because the first image is at least part of the image currently displayed on the display screen of the second device, the image is not restricted by any application program, and correspondingly the first image is not restricted by the first device itself, so that the first device can obtain the first feature from a second device other than the first device, further improving the flexibility and diversity of obtaining image features and fully meeting user needs. For example, a user can apply the color or texture of a photo in the camera roll of a mobile phone to a drawing program on a tablet computer.
- the first device performs an image-text editing operation based on the first feature.
- the first device may perform a graphic editing operation based on the first feature in the graphic editing program, so as to apply the first feature to a new text or image, so that the operated object has the first feature.
- the first device may bind the first feature to the stylus. If the first device detects the drawing operation of the stylus, the image feature of the text or image drawn by the drawing operation is set as the first feature.
- for example, if the first feature is an RGB value, the first device can bind the RGB value to the stylus, and when the user draws with the stylus, the color of the drawn track is the color indicated by the RGB value.
- similarly, if the first feature is a texture feature, the first device can bind the texture feature to the stylus, and when the user draws with the stylus, the texture feature of the drawn track is the texture feature bound to the stylus.
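The binding described above can be sketched as follows. The `Stylus` class and its attribute names are illustrative assumptions, not an actual stylus API: once a feature is bound, every track the stylus draws carries that feature.

```python
class Stylus:
    def __init__(self):
        self.bound = {}  # bound feature type -> value, e.g. "color" -> RGB tuple

    def bind(self, feature_type: str, value):
        """Bind an acquired first feature (e.g. an RGB value) to the stylus."""
        self.bound[feature_type] = value

    def draw(self, points):
        """A drawn track inherits every feature currently bound to the stylus."""
        return {"points": list(points), **self.bound}

pen = Stylus()
pen.bind("color", (255, 0, 0))        # e.g. an RGB value extracted earlier
stroke = pen.draw([(0, 0), (10, 10)])
```

Binding a texture feature works the same way: `pen.bind("texture", ...)` makes subsequent strokes carry the bound texture.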
- the first device or the second device may not obtain the first feature from the first image, but directly copy the first image to the graphic editing program.
- the first device may not execute S511 to immediately apply the first feature, that is, S511 is an optional step.
- the first device may add the obtained first feature to an image feature library such as a built-in palette or a built-in texture image library, so that the user can next time directly obtain the first feature from that image feature library.
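Storing the feature for reuse might look like the following sketch; the `FeatureLibrary` class is a hypothetical stand-in for the built-in palette and texture image library, not an API defined by the description.

```python
class FeatureLibrary:
    """Hypothetical built-in palette / texture image library."""

    def __init__(self):
        self.entries = {"color": [], "texture": []}

    def add(self, feature_type: str, value):
        # Deduplicate so repeated acquisitions do not clutter the library.
        if value not in self.entries[feature_type]:
            self.entries[feature_type].append(value)

    def pick(self, feature_type: str, index: int):
        # The user later selects the stored feature directly from the library.
        return self.entries[feature_type][index]
```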
- the first device may also provide the second feature to the third device associated with the first device in a manner similar to that of the second device.
- the first device may receive a third acquisition request from the third device, where the third acquisition request is used to request to acquire image features from the first device.
- the first device acquires a third image, where the third image may be at least part of the image currently displayed on the display screen of the first device; the first device extracts the second feature from the third image and feeds the second feature back to the third device.
- the first device may receive a fourth acquisition request from the third device, where the fourth acquisition request is used to request to acquire an image for acquiring image features from the first device.
- the first device acquires the third image, and feeds back the third image to the third device.
- the third device extracts the second feature from the third image.
- the first device may acquire the first feature, where the first feature is a feature of the first image of the target device.
- the target device may be the first device, or may be a second device associated with the first device, and the first image may be at least part of an image currently displayed on a display screen of the target device. Since the content of this screen comes from a wide range of sources, it may be the interface of an application program on the target device, or it may be the superposition of multiple application program interfaces on the target device.
- the screen may be a frame of a video being played, or a list of multiple photos in an album; therefore, the first image is not limited by a certain application program or by the first device itself, and the first features that the first image may include are also extremely flexible and diverse. The flexibility and diversity of acquiring image features are therefore greatly improved, and the needs of users can be fully met.
- FIG. 13 is a schematic structural diagram of an electronic device 1300 provided by the embodiment of the present application.
- the electronic device provided by this embodiment includes a memory 1310 and a processor 1320; the memory 1310 is used to store a computer program, and the processor 1320 is used to execute the methods described in the above method embodiments when invoking the computer program.
- the electronic device provided in this embodiment can execute the foregoing method embodiment, and its implementation principle and technical effect are similar, and details are not repeated here.
- an embodiment of the present application also provides a chip system.
- the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, so as to implement the methods described in the above method embodiments.
- the chip system may be a single chip, or a chip module composed of multiple chips.
- the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method described in the foregoing method embodiment is implemented.
- An embodiment of the present application further provides a computer program product, which, when the computer program product is run on an electronic device, enables the electronic device to implement the method described in the foregoing method embodiments.
- if the above integrated units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
- when the computer program is executed by a processor, the steps in the above-mentioned method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
- the computer-readable storage medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, or a software distribution medium.
- in some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
- the disclosed device/device and method can be implemented in other ways.
- the device/device embodiments described above are only illustrative.
- the division of the modules or units is only a logical function division.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- the term "if" may be construed, depending on the context, as "when", "once", "in response to determining" or "in response to detecting".
- the phrase "if determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
- references to "one embodiment" or "some embodiments" and the like in the specification of the present application mean that a particular feature, structure, or characteristic described in connection with that embodiment is included in one or more embodiments of the present application.
- appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
- the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (15)
- A method for acquiring image features, comprising: a first device receiving a first acquisition instruction, the first acquisition instruction being used to instruct the first device to acquire an image feature; and the first device, in response to the first acquisition instruction, acquiring a first feature, the first feature being a feature of a first image of a target device, the target device being the first device or a second device associated with the first device, and the first image being at least part of the image currently displayed on a display screen of the target device.
- The method according to claim 1, wherein the target device is the first device, and the first device acquiring the first feature in response to the first acquisition instruction comprises: the first device acquiring, based on a first screenshot operation, the first image from the screen currently displayed on the display screen of the first device; and the first device extracting the first feature from the first image.
- The method according to claim 2, wherein before the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, the method further comprises: the first device creating a first window based on the first acquisition instruction, a size of the first window being the same as a size of the display screen of the first device, and the first window being a transparent window located above the other windows displayed on the display screen; the first device acquiring the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation comprises: the first device, if receiving the first screenshot operation based on the first window, acquiring the first image from the screen currently displayed on the display screen of the first device; and after the first device acquires the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation, the method further comprises: the first device closing the first window.
- The method according to claim 2 or 3, wherein the first device acquiring the first image from the screen currently displayed on the display screen of the first device based on the first screenshot operation comprises: the first device determining a first closed region on the display screen of the first device based on the first screenshot operation; and the first device acquiring the first image based on the first closed region.
- The method according to claim 4, wherein the first device determining the first closed region on the display screen of the first device based on the first screenshot operation comprises: the first device determining a first position based on the first screenshot operation, and determining a region within a first frame at the first position as the first closed region, the first frame being a preset frame; or, the first screenshot operation being a sliding operation, and the first device determining a closed region formed by a sliding track of the sliding operation as the first closed region.
- The method according to claim 4 or 5, wherein the first device acquiring the first image based on the first closed region comprises: the first device capturing the first image in the first closed region from the screen currently displayed on the display screen of the first device; or, the first device capturing the screen currently displayed on the display screen of the first device as a second image, and cropping the second image based on the first closed region to obtain the first image.
- The method according to any one of claims 2-6, wherein before the first device acquires the first feature in response to the first acquisition instruction, the method further comprises: the first device receiving a first setting instruction, the first setting instruction being used to indicate that the target device is the first device.
- The method according to claim 1, wherein the target device is the second device, and the first device acquiring the first feature in response to the first acquisition instruction comprises: the first device sending a first acquisition request to the second device, the first acquisition request corresponding to the first acquisition instruction and being used to request to acquire an image feature from the second device; and the first device receiving the first feature fed back by the second device.
- The method according to claim 1, wherein the target device is the second device, and the first device acquiring the first feature in response to the first acquisition instruction comprises: the first device sending a second acquisition request to the second device, the second acquisition request corresponding to the first acquisition instruction and being used to request to acquire an image from the second device; the first device receiving the first image fed back by the second device; and the first device extracting the first feature from the first image.
- The method according to claim 8 or 9, wherein before the first device acquires the first feature in response to the first acquisition instruction, the method further comprises: the first device receiving a second setting instruction, the second setting instruction being used to indicate that the target device is the second device.
- The method according to any one of claims 1-10, wherein before the first device acquires the first feature in response to the first acquisition instruction, the method further comprises: the first device receiving a third setting instruction, the third setting instruction being used to indicate a target feature type for acquiring image features; and the first device acquiring the first feature in response to the first acquisition instruction comprises: the first device acquiring the first feature based on the target feature type.
- The method according to claim 11, wherein the target feature type comprises a color type or a texture type.
- A method for acquiring image features, comprising: a second device receiving a first acquisition request sent by a first device, the first acquisition request being used to request to acquire an image feature from the second device; the second device acquiring a first image, the first image being at least part of the image currently displayed on a display screen of the second device; the second device extracting a first feature from the first image; and the second device feeding back the first feature to the first device.
- An electronic device, comprising: a memory and a processor, the memory being used to store a computer program; and the processor being used to execute, when invoking the computer program, the method according to any one of claims 1-12 or the method according to claim 13.
- A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the method according to any one of claims 1-12 or the method according to claim 13 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22827120.1A EP4343520A1 (en) | 2021-06-25 | 2022-04-06 | Image feature obtaining method and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110713551.2A CN115525183A (zh) | 2021-06-25 | 2021-06-25 | 获取图像特征的方法及电子设备 |
CN202110713551.2 | 2021-06-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022267617A1 true WO2022267617A1 (zh) | 2022-12-29 |
Family
ID=84545203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/085325 WO2022267617A1 (zh) | 2021-06-25 | 2022-04-06 | 获取图像特征的方法及电子设备 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4343520A1 (zh) |
CN (1) | CN115525183A (zh) |
WO (1) | WO2022267617A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015043382A1 (zh) * | 2013-09-30 | 2015-04-02 | 北京奇虎科技有限公司 | 一种适用于触屏设备的截图装置和方法 |
CN105242920A (zh) * | 2015-09-21 | 2016-01-13 | 联想(北京)有限公司 | 一种截图系统、截图方法以及电子设备 |
CN106033305A (zh) * | 2015-03-20 | 2016-10-19 | 广州金山移动科技有限公司 | 一种屏幕取色方法及装置 |
US20170315772A1 (en) * | 2014-11-05 | 2017-11-02 | Lg Electronics Inc. | Image output device, mobile terminal, and control method therefor |
CN109299310A (zh) * | 2018-12-05 | 2019-02-01 | 王相军 | 一种屏幕图像取色和搜索方法及系统 |
CN111596848A (zh) * | 2020-05-09 | 2020-08-28 | 远光软件股份有限公司 | 一种界面取色方法、装置、设备及存储介质 |
-
2021
- 2021-06-25 CN CN202110713551.2A patent/CN115525183A/zh active Pending
-
2022
- 2022-04-06 EP EP22827120.1A patent/EP4343520A1/en active Pending
- 2022-04-06 WO PCT/CN2022/085325 patent/WO2022267617A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN115525183A (zh) | 2022-12-27 |
EP4343520A1 (en) | 2024-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11922005B2 (en) | Screen capture method and related device | |
KR102135215B1 (ko) | 정보 처리 방법 및 단말 | |
US20210294429A1 (en) | Apparatus, method and recording medium for controlling user interface using input image | |
EP3547218B1 (en) | File processing device and method, and graphical user interface | |
CN111240673B (zh) | 互动图形作品生成方法、装置、终端及存储介质 | |
EP3195601B1 (en) | Method of providing visual sound image and electronic device implementing the same | |
EP3693837A1 (en) | Method and apparatus for processing multiple inputs | |
CN107368810A (zh) | 人脸检测方法及装置 | |
CN114115619A (zh) | 一种应用程序界面显示的方法及电子设备 | |
US20230367464A1 (en) | Multi-Application Interaction Method | |
WO2021169466A1 (zh) | 信息收藏方法、电子设备及计算机可读存储介质 | |
US20240193203A1 (en) | Presentation Features for Performing Operations and Selecting Content | |
CN116095413B (zh) | 视频处理方法及电子设备 | |
WO2023236794A1 (zh) | 一种音轨标记方法及电子设备 | |
US9195310B2 (en) | Camera cursor system | |
CN109725806A (zh) | 站点编辑方法及装置 | |
CN115700461A (zh) | 投屏场景下的跨设备手写输入方法、系统和电子设备 | |
WO2022267617A1 (zh) | 获取图像特征的方法及电子设备 | |
KR102076629B1 (ko) | 휴대 장치에 의해 촬영된 이미지들을 편집하는 방법 및 이를 위한 휴대 장치 | |
CN111626233B (zh) | 一种关键点标注方法、系统、机器可读介质及设备 | |
CN107885571A (zh) | 显示页面控制方法及装置 | |
WO2024125301A1 (zh) | 显示方法和电子设备 | |
CN116095412B (zh) | 视频处理方法及电子设备 | |
AU2015255305B2 (en) | Facilitating image capture and image review by visually impaired users | |
KR20210101183A (ko) | 입력 영상을 이용한 사용자 인터페이스 제어 방법, 장치 및 기록매체 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22827120 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022827120 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18572791 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2022827120 Country of ref document: EP Effective date: 20231219 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |