CN109151320B - Target object selection method and device - Google Patents


Info

Publication number
CN109151320B
Authority
CN
China
Prior art keywords
user, target object, target, determining, relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811150282.8A
Other languages
Chinese (zh)
Other versions
CN109151320A (en)
Inventor
陈宏星
李斌
李辉
周天飞
臧晨迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201811150282.8A
Publication of CN109151320A
Application granted
Publication of CN109151320B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces

Abstract

The invention provides a target object selection method and a target object selection device. When a first operation for a target object in a first image is acquired, user-desired-object information indicated by the first operation is determined in response to that operation, and the target object is selected from the candidate objects in the first image based on that information. Selecting the target object according to the user-desired-object information improves the degree to which the target object matches the object the user desires, so a target object consistent with the user's desired object can be obtained in one pass, without repeated photographing, and target object selection efficiency is improved.

Description

Target object selection method and device
Technical Field
The present invention relates to the field of information processing technologies, and in particular to a target object selection method and apparatus.
Background
The current process by which a terminal automatically selects a target object from an image is as follows: the terminal obtains the objects contained in the image through edge detection, then selects a target object from them using a preset selection strategy in an AI (Artificial Intelligence) algorithm.
However, the target object selected by the AI algorithm may not be the user-desired object (i.e., the object that the user actually wants to select). To make the AI algorithm select the user-desired object as the target object, the terminal's shooting area must be adjusted so that the previously selected object falls outside the shooting area, after which the AI algorithm selects a new target object from the image captured in that area. This approach may require repeated adjustments of the shooting area before the user-desired object is finally selected as the target object.
Disclosure of Invention
In view of this, the present invention provides a target object selection method and device, which are used for selecting a target object based on user desired object information indicated by a first operation, so as to improve a matching degree between the target object and the user desired object. The technical scheme is as follows:
the invention provides a target object selection method, which comprises the following steps:
acquiring a first operation aiming at a target object in a first image;
in response to the first operation, determining user desired object information indicated by the first operation;
and selecting a target object from various objects to be selected of the first image based on the user desired object information indicated by the first operation.
Preferably, the user desired object information includes: attribute requirements of the user-desired object;
the selecting a target object from the objects to be selected of the first image based on the user desired object information indicated by the first operation comprises:
determining at least one object to be selected which meets the attribute requirement of the object expected by the user from the objects to be selected;
and selecting one object from the at least one object to be selected as a target object.
Preferably, the determining at least one object to be selected which meets the attribute requirement of the object desired by the user from the objects to be selected includes:
and determining at least one object to be selected which meets at least one of the requirements of the size of the object desired by the user, the height and width of the object desired by the user, the position of the object desired by the user and the display effect of the object desired by the user from the various objects to be selected.
Preferably, the user desired object information includes: a first relationship between the target object and a user desired object;
the selecting a target object from the objects to be selected of the first image based on the user desired object information indicated by the first operation comprises:
determining a second relationship between each object to be selected and the target object;
screening at least one object to be selected with the second relation matched with the first relation from each object to be selected;
and selecting one object from the at least one object to be selected, and replacing the target object with the selected object.
Preferably, the first relationship between the target object and the user desired object includes: at least one of a size relationship, a positional relationship, and a display effect of one of the target object and the user desired object with respect to the other;
the screening of the at least one object to be selected, of which the second relationship is matched with the first relationship, from the objects to be selected includes: and screening at least one object to be selected with the second relation identical to the first relation from the objects to be selected.
Preferably, the method further comprises: in response to the first operation, determining a target object selected before the first operation as a non-user-desired object;
or
Obtaining a second operation for the target object in the first image;
and in response to the second operation, determining the target object selected before the second operation as a non-user-desired object.
The invention also provides a target object selecting device, which comprises:
an acquisition unit configured to acquire a first operation for a target object in a first image;
a determination unit configured to determine, in response to the first operation, user-desired-object information indicated by the first operation;
and the selecting unit is used for selecting a target object from the objects to be selected of the first image based on the user desired object information indicated by the first operation.
Preferably, the user desired object information includes: attribute requirements of the user-desired object; the selecting unit is used for determining at least one object to be selected which meets the attribute requirement of the object expected by the user from the objects to be selected, and selecting one object from the at least one object to be selected as a target object;
or
The user desired object information includes: a first relationship between the target object and a user desired object; the selecting unit is configured to determine a second relationship between each object to be selected and the target object, screen at least one object to be selected from the objects to be selected, the second relationship of which matches the first relationship, select one object from the at least one object to be selected, and replace the target object with the selected object.
The invention also provides a terminal, which comprises a processor and a memory;
the processor is used for acquiring a first operation aiming at a target object in a first image, determining user desired object information indicated by the first operation in response to the first operation, and selecting the target object from various objects to be selected in the first image based on the user desired object information indicated by the first operation;
the memory is used for storing each object to be selected and/or the target object.
The invention also provides a storage medium, wherein the storage medium is stored with computer program codes, and when the computer program codes are executed, the target object selection method is realized.
According to the technical scheme, when the first operation for the target object in the first image is acquired, the user-desired-object information indicated by the first operation is determined in response to that operation, and the target object is selected from the candidate objects in the first image based on that information. Because the target object is selected according to the user-desired-object information, the degree to which it matches the user-desired object is improved, and a target object consistent with the user-desired object can be obtained in one pass, without repeated photographing, which improves target object selection efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a target object selection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a terminal pose provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of another terminal pose provided by embodiments of the present invention;
fig. 4 is a schematic structural diagram of a target object selecting apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of a target object selecting method according to an embodiment of the present invention is shown, where the method is used to improve a matching degree between a target object and a user desired object, and includes the following steps:
101: a first operation is acquired for a target object in a first image.
The first operation is used to indicate selection of the target object from the first image, and it covers two cases. In the first case, no target object has yet been selected in the first image, and the first operation directly selects one. In the second case, a target object has already been selected in the first image, and the first operation selects a target object again; the reselected target object and the previously selected one may be output together, or the reselected object may replace the previous one so that only one target object is output at a time. The previously selected target object may have been selected by an earlier first operation or by other means (such as an AI algorithm).
The first operation may be any operation that can be interpreted as selecting a target object, for example a key press (e.g., pressing the volume-up or volume-down key), a sliding operation (e.g., sliding up/down or left/right), a zooming operation, a hover gesture, or a terminal posture (e.g., the terminal being in a landscape or portrait state). If the target object is selected through multiple first operations, those operations may all be of the same kind, or at least some may differ; each may be any of the operations listed above that can be regarded as selecting a target object, and together they select the target object.
102: in response to the first operation, user desired object information indicated by the first operation is determined.
It can be understood that, for the target object to match the user-desired object, some requirements of the user-desired object must be determined when the target object is selected. Since the first operation is used to instruct selection of the target object from the first image, the user-desired-object information may conveniently be indicated by the first operation itself, making it easy to select a target object matching the user-desired object based on that operation.
The user-desired-object information indicates one or more requirements of the user-desired object, such as at least one of its position, size, width/height, and display effect. A requirement may concern the user-desired object itself, or may be expressed relative to some reference in the first image, such as the center point, an edge point, or the previously selected target object.
For example, when the first image already has a selected target object, a requirement may be (but is not limited to) one expressed relative to that selected target object. The position requirement may indicate which side of the selected target object the user-desired object lies on, e.g., that it lies to the left of the selected target object. Likewise, the size requirement may indicate that the user-desired object is smaller or larger than the selected target object.
When no target object has been selected in the first image, a requirement may be (but is not limited to) one concerning the user-desired object itself. For example, the width-height requirement may be a relative relationship between the object's width and height, such as its width being greater than its height. The display-effect requirement may specify that the user-desired object differ from the other objects in the first image in at least one of brightness, color, definition, and the like (e.g., its definition is greater than that of the other objects), or that it exhibit a particular effect (e.g., a more yellow color). This embodiment is not limited to these examples.
The user-desired-object information may further indicate an attribute value corresponding to a requirement, and that value may be determined from the operation data of the first operation. Take the position requirement as an example: the information may indicate a specific position of the user-desired object in the first image, such as its distance from a reference point. That distance may be determined from the sliding distance (a kind of operation data) of, say, a leftward slide interpreted as the first operation: given a preset correspondence between sliding distance and the distance from the reference point, obtaining the sliding distance yields the distance between the user-desired object and a point of the first image. That point may be the center point or an edge point of the first image; this embodiment is not limited. Of course, when the first image already has a selected target object, the information may instead indicate the distance between the user-desired object and the selected target object.
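The preset correspondence between sliding distance and desired position described above can be sketched as a simple lookup. This is an illustrative assumption, not the patent's implementation; the table shape, pixel units, and nearest-smaller-key fallback rule are all hypothetical:

```python
def distance_from_slide(slide_px, correspondence):
    """Map a sliding distance (operation data of the first operation) to
    the desired distance between the user-desired object and a reference
    point of the first image, via a preset correspondence table.

    `correspondence` maps a minimum sliding distance to a target distance;
    the largest key not exceeding `slide_px` wins (hypothetical rule).
    """
    eligible = [k for k in correspondence if k <= slide_px]
    if not eligible:
        return 0  # slide too short to indicate any position
    return correspondence[max(eligible)]
```

For a table such as `{0: 0, 50: 100, 100: 200}`, a 60-pixel slide would map to a desired distance of 100.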
As another example, for the size requirement, the user-desired-object information may indicate a size difference between the user-desired object and the selected target object, or the size of the user-desired object itself, such as a size value (e.g., a width value) or a size range.
The user-desired-object information is described below with reference to a specific first operation. For example, the first operation may be an operation of adjusting the terminal posture, which can place the terminal in a landscape or portrait state. As shown in Fig. 2, when the terminal is in the portrait state, the user-desired-object information determined from the first operation may indicate that the user-desired object is taller than it is wide; Fig. 3 shows the terminal in the landscape state, in which the information may indicate that the user-desired object is wider than it is tall.
The first operation of adjusting the terminal posture may be triggered while the terminal shoots the first image, so that the user-desired-object information is obtained at shooting time. If the terminal is in the landscape state when shooting the first image, the corresponding information indicates that the user-desired object is wider than it is tall; if in the portrait state, that it is taller than it is wide. Thus how the first image is shot can be determined by the requirement on the user-desired object, and the information is obtained as the image is shot. If the information should indicate that the height and width of the user-desired object are equal, the terminal may be adjusted to a certain angle for shooting, or either portrait or landscape shooting may be defined to also cover the equal case; of course, other manners may be adopted as well, such as (but not limited to) carrying no user-desired-object information at all to indicate that height and width are equal.
If the first operation is a key operation, such as pressing a volume key: when a target object has already been selected in the first image and the first operation of pressing the volume-up key is acquired, the determined user-desired-object information may indicate that the user-desired object is larger than the selected target object; if the first operation of pressing the volume-down key is acquired, the information may indicate that the user-desired object is smaller than the selected target object.
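The mapping from a concrete first operation to user-desired-object information described above might look like the following sketch. The operation identifiers and dictionary keys are invented for illustration; the patent does not prescribe any particular representation:

```python
def interpret_first_operation(op):
    """Translate a first operation into user-desired-object information.
    Both the operation identifiers and the returned keys are hypothetical."""
    table = {
        "volume_up":   {"size_relation": "larger_than_selected"},
        "volume_down": {"size_relation": "smaller_than_selected"},
        "portrait":    {"aspect": "height_greater_than_width"},
        "landscape":   {"aspect": "width_greater_than_height"},
    }
    if op not in table:
        raise ValueError(f"unrecognized first operation: {op}")
    return table[op]
```

In a real terminal this table would be extended with sliding, zooming, and hover-gesture entries, each mapped to the requirement it is meant to express.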
103: A target object is selected from the candidate objects in the first image based on the user-desired-object information indicated by the first operation, so that the selected target object is related to that information and the matching degree between the target object and the user-desired object is improved. The target object may be selected in, but is not limited to, the following ways:
one way is as follows: the user desired object information includes: the attribute requirement of the user expected object, and the corresponding process of selecting the target object is as follows: and determining at least one object to be selected which meets the attribute requirement of the object expected by the user from all the objects to be selected, and selecting one object from the at least one object to be selected as a target object.
That is, at least one object to be selected which meets the attribute requirement of the object desired by the user is determined, and one object is selected from the determined at least one object to be selected as the target object. If the target object is selected before the target object, the target object can be replaced by the previously selected target object, or the target object and the previously selected target object are both allowed to be output as the target object; and if the target object is not selected before the target object, directly outputting the selected object as the target object.
In this embodiment, the attribute requirements of the user-desired object may include the requirements described above, which are not repeated here. Correspondingly, determining at least one candidate meeting the attribute requirement of the user-desired object means determining the candidates that meet at least one of those attribute requirements.
For example: if the attribute requirement is the width-height requirement, and that requirement is that the user-desired object be wider than it is tall, then at least one candidate whose width is greater than its height is determined from the candidates. As another example: if the attribute requirement is the position requirement, and it places the user-desired object in the left region of the first image, then at least one candidate located in the left region is selected (how the first image is divided into regions is not detailed in this embodiment); if the position requirement is that the user-desired object lie to the left of the selected target object (or the selected target object lie to the left of the user-desired object), then at least one candidate to the left of the selected target object is selected.
If the attribute requirements of the user-desired object include at least two requirements, the candidates determined must meet every one of them. For example, if the attribute requirements include both the size requirement and the display-effect requirement of the user-desired object, a determined candidate must satisfy both requirements simultaneously.
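The attribute-based selection above — keep only candidates that satisfy every stated requirement, then pick one — can be sketched as follows. The `Candidate` fields and the two example requirements are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A detected candidate object with its bounding box (assumed fields)."""
    x: float       # left edge within the first image
    y: float       # top edge
    width: float
    height: float

def meets_all_requirements(candidate, requirements):
    """A candidate qualifies only if it satisfies every attribute
    requirement simultaneously (each requirement is a predicate)."""
    return all(req(candidate) for req in requirements)

def select_target(candidates, requirements):
    """Return the first qualifying candidate, or None if none qualifies."""
    for c in candidates:
        if meets_all_requirements(c, requirements):
            return c
    return None

# Example requirements (hypothetical): wider than tall, and centered in
# the left region of an image assumed to be 100 units wide.
wider_than_tall = lambda c: c.width > c.height
in_left_region = lambda c: (c.x + c.width / 2) < 50
```

A production selector would rank the qualifying candidates rather than take the first, but the conjunction of requirements is the essential step.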
Another way: the user-desired-object information includes a first relationship between the target object and the user-desired object. The corresponding selection process is: determine the second relationship between each candidate object and the target object, screen from the candidates at least one whose second relationship matches the first relationship, select one object from those candidates, and replace the target object with it.
That is, based on the first relationship between the target object and the user-desired object, the candidates whose second relationship to the target object matches (e.g., equals) the first relationship are screened out, and one of them is then selected to replace the target object.
In this embodiment, the first relationship between the target object and the user-desired object includes at least one of a size relationship, a positional relationship, and a display-effect relationship of one relative to the other, as described above. Each relationship may also indicate a corresponding attribute value; for example, the size relationship may indicate the size difference between the user-desired object and the target object, i.e., how much larger or smaller the user-desired object is relative to the target object. Details are not repeated in this embodiment.
Take the size relationship of the user-desired object relative to the target object as an example: if that relationship is that the user-desired object is larger than the target object, the candidates larger than the target object are screened from all candidates, and one object is then selected from the screened set. Similarly, if the first relationship includes at least two relationships, a screened candidate must satisfy all of them relative to the target object simultaneously.
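The relationship-based screening above — compare each candidate against the already-selected target and keep those whose relation matches the first relationship — can be sketched like this for the size case. The dict shape of an object is an assumption:

```python
def screen_by_size_relation(candidates, selected_target, desired_larger):
    """Keep candidates whose size relative to the selected target (the
    second relationship) matches the first relationship: the user-desired
    object should be larger (True) or smaller (False) than the target."""
    area = lambda obj: obj["width"] * obj["height"]
    ref = area(selected_target)
    if desired_larger:
        return [c for c in candidates if area(c) > ref]
    return [c for c in candidates if area(c) < ref]
```

When the first relationship contains several relations (size and position, say), each surviving candidate would have to match all of them, mirroring the multi-requirement attribute case.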
In addition, in this embodiment the two ways may be combined: over multiple selections of the target object, some selections may be based on the attribute requirement of the user-desired object and others on the first relationship between the target object and the user-desired object.
A point to note: if the target object is to be selected by means of multiple first operations, then for the first acquired first operation the target object can be selected based on the user-desired-object information indicated by that operation alone; for every subsequent first operation, the selection must use the user-desired-object information indicated by the current first operation together with all preceding first operations. Taking the i-th first operation (i being a natural number greater than 1) as an example: when the i-th first operation is acquired, the user-desired-object information indicated by the first through i-th first operations is used together for selection. For example, when the target object is selected by an AI algorithm, the user-desired-object information indicated by the first through i-th first operations may serve as input parameters of the AI algorithm.
According to the technical scheme, when the first operation for the target object in the first image is acquired, the user-desired-object information indicated by the first operation is determined in response to that operation, and the target object is selected from the candidate objects in the first image based on that information. Because the target object is selected according to the user-desired-object information, the degree to which it matches the user-desired object is improved, and a target object consistent with the user-desired object can be obtained in one pass, without repeated photographing, which improves target object selection efficiency.
In addition, the first operation may serve functions other than instructing selection of the target object. For example, the target object selection method provided in this embodiment may further include: in response to the first operation, determining the target object selected before the first operation to be a non-user-desired object. The first operation thus both selects a target object matching the user-desired object and marks the previously selected target object as non-user-desired, triggering two kinds of processing with one operation.
Alternatively, in this embodiment the previously selected target object may be marked as a non-user-desired object by a different operation. For example, the target object selection method may further include: obtaining a second operation for the target object in the first image, and in response to the second operation, determining the target object selected before the second operation to be a non-user-desired object.
The second operation is an operation different from the first operation and is obtained before the target object is selected based on the user-desired-object information indicated by the first operation; for example, the second operation may be (but is not limited to) an operation of shaking the terminal. The two kinds of processing are thus triggered by two separate operations.
After the selected target object is determined as a non-user-desired object through the first operation or the second operation, the non-user-desired object can be excluded when the target object is selected again, which prevents the same target object from being selected repeatedly. Moreover, when the selected target object is determined as a non-user-desired object, the non-user-desired object may be stored in an exclusion list, and every object in the exclusion list is prohibited from participating in target object selection. Alternatively, the non-user-desired objects and the objects to be selected may be stored separately, for example in different storage spaces, so that the target object is selected only from the storage space holding the objects to be selected, reducing the probability of selection errors.
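The exclusion-list bookkeeping described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class, its field names, and the string objects are all invented for the example.

```python
class CandidatePool:
    """Keeps objects marked as non-user-desired out of future selections."""

    def __init__(self, candidates):
        self.candidates = list(candidates)   # objects still eligible
        self.excluded = []                   # exclusion list of rejected objects

    def reject(self, obj):
        """Mark a previously selected target as a non-user-desired object."""
        if obj in self.candidates:
            self.candidates.remove(obj)      # stored separately from candidates
        if obj not in self.excluded:
            self.excluded.append(obj)

    def select(self, predicate):
        """Select a target only from the eligible storage space;
        objects in the exclusion list never participate."""
        for obj in self.candidates:
            if predicate(obj):
                return obj
        return None

pool = CandidatePool(["cat", "dog", "tree"])
pool.reject("cat")                           # user rejects the earlier target
target = pool.select(lambda o: True)
print(target)  # -> dog: "cat" is excluded, so it cannot be re-selected
```

Storing rejected objects apart from the candidates, rather than flagging them in place, mirrors the "different storage spaces" variant in the text: the selection loop only ever touches eligible objects.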
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the illustrated ordering of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides a target object selecting apparatus, where the structure of the target object selecting apparatus is shown in fig. 4, and the target object selecting apparatus may include: an acquisition unit 11, a determination unit 12 and a selection unit 13.
An acquisition unit 11 is configured to acquire a first operation on a target object in a first image.
The first operation is used to indicate that a target object is to be selected from the first image, and it covers two cases. In the first case, no target object has yet been selected in the first image, and the first operation indicates that a target object is to be selected directly. In the second case, a target object has already been selected in the first image, and a target object is re-selected based on the first operation; the re-selected target object and the previously selected target object may be output simultaneously, or the previously selected target object may be replaced by the re-selected one, so that only one target object is output at a time. The previously selected target object may have been selected through an earlier first operation or by other means (such as an AI algorithm). For the specific form of the first operation, refer to the detailed description in the method embodiment, which is not repeated here.
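The two cases above can be sketched as a small dispatch function. All names here are invented for illustration; the embodiment leaves open whether re-selection replaces the old target or outputs both, so that choice is a parameter.

```python
def targets_to_output(current_target, new_target, replace=True):
    """Decide what to output after a first operation.

    Case 1: no target selected yet -> output the new target directly.
    Case 2: a target already exists -> either replace it, so only one
    target is output at a time, or output both targets simultaneously.
    """
    if current_target is None:
        return [new_target]                    # case 1: direct selection
    if replace:
        return [new_target]                    # case 2a: replace old target
    return [current_target, new_target]        # case 2b: output both

print(targets_to_output(None, "person"))           # -> ['person']
print(targets_to_output("person", "car"))          # -> ['car']
print(targets_to_output("person", "car", False))   # -> ['person', 'car']
```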
A determining unit 12 is configured to determine, in response to the first operation, the user desired object information indicated by the first operation. The user desired object information indicates a requirement on the user desired object, such as at least one of a position requirement, a size requirement, a height-and-width requirement, and a display-effect requirement. The requirement may concern the user desired object itself, or the user desired object relative to a reference in the first image, such as the center point or an edge point of the first image, or the previously selected target object. The user desired object information may differ between the case where a target object has been selected in the first image and the case where none has; for details, refer to the relevant description in the method embodiment.
The user desired object information may further indicate an attribute value corresponding to the requirement, and the attribute value may be determined based on the operation data of the first operation. Taking the position requirement of the user desired object as an example, the user desired object information may further indicate a specific position of the user desired object in the first image, such as its distance from a reference point of the first image. That distance may be determined based on the sliding distance (one kind of operation data) of, for example, a leftward slide of the first operation: given a preset correspondence between the sliding distance and the distance between the user desired object and the reference point, once the sliding distance is obtained, the distance between the user desired object and a point of the first image can be determined from the correspondence. The point of the first image may be its center point, an edge point, or the like, which is not limited in this embodiment. Of course, when a target object has already been selected in the first image, the user desired object information may instead indicate the distance between the user desired object and the selected target object.
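The preset correspondence between sliding distance and desired distance described above could be as simple as a lookup table. The table below is a sketch with invented values; the patent does not specify the mapping.

```python
# (max sliding distance in px, desired distance from the reference point)
SLIDE_TO_IMAGE_DISTANCE = [
    (50, 20),
    (150, 80),
    (400, 200),
]

def desired_distance(slide_px):
    """Map operation data (sliding distance of the first operation) to the
    user-desired object's distance from the image reference point
    (e.g. the center point) via the preset correspondence."""
    for max_slide, dist in SLIDE_TO_IMAGE_DISTANCE:
        if slide_px <= max_slide:
            return dist
    return SLIDE_TO_IMAGE_DISTANCE[-1][1]    # clamp very long slides

print(desired_distance(100))  # falls in the (150, 80) bucket -> 80
```

A continuous mapping (for example, a linear scale factor from slide pixels to image pixels) would serve equally well; the essential point is that operation data determines the attribute value.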
As another example, for the size requirement of the user desired object, the user desired object information may indicate a size difference between the user desired object and the selected target object, or the size of the user desired object itself, such as a size value (e.g., a width value) or a size range. For illustrations of the first operation and the user desired object information, refer to fig. 2 and fig. 3 together with their explanations in the method embodiment; a detailed description is omitted here.
The selecting unit 13 is configured to select a target object from the objects to be selected in the first image based on the user desired object information indicated by the first operation, so that the selected target object is related to the user desired object information, thereby improving the matching degree between the target object and the user desired object. In this embodiment the target object may be selected in, but is not limited to, the following ways:
In one way, the user desired object information includes the attribute requirement of the user desired object, and the selecting unit 13 selects the target object as follows: determining, from the objects to be selected, at least one object to be selected that meets the attribute requirement of the user desired object, and selecting one object from that at least one object to be selected as the target object.
In this embodiment, the attribute requirements of the user desired object may include at least one of the size requirement, the height-and-width requirement, the position requirement, and the display-effect requirement of the user desired object; for descriptions of these requirements, refer to the above, which this embodiment does not repeat. Correspondingly, determining at least one object to be selected that meets the attribute requirement of the user desired object means determining an object to be selected that meets at least one of the attribute requirements.
For example, if the attribute requirement of the user desired object is the height-and-width requirement, and that requirement is that the width of the user desired object be larger than its height, then at least one object to be selected whose width is larger than its height is determined from the objects to be selected. If the attribute requirements of the user desired object include at least two requirements, then at least one object to be selected meeting each of the at least two requirements needs to be determined from the objects to be selected; for example, if the attribute requirements include both the size requirement and the display-effect requirement of the user desired object, the determined object to be selected needs to satisfy the two requirements simultaneously.
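The attribute-requirement filtering above can be sketched as predicates applied to each candidate. The candidate objects, field names, and the second requirement are invented for illustration; only the "width larger than height" example comes from the text.

```python
def meets_all(obj, requirements):
    """A candidate is eligible only if it satisfies every requirement
    simultaneously (requirements: a list of predicates)."""
    return all(req(obj) for req in requirements)

candidates = [
    {"name": "building", "w": 300, "h": 120},
    {"name": "person",   "w": 60,  "h": 170},
    {"name": "car",      "w": 220, "h": 140},
]

# The height-and-width requirement from the text: width larger than height.
wider_than_tall = lambda o: o["w"] > o["h"]
# A second, invented requirement, to show combined filtering.
min_width = lambda o: o["w"] >= 250

eligible = [o for o in candidates if meets_all(o, [wider_than_tall, min_width])]
target = eligible[0] if eligible else None
print(target["name"])  # -> building: the only candidate meeting both requirements
```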
In another way, the user desired object information includes a first relationship between the target object and the user desired object, and the selecting unit 13 selects the target object as follows: determining a second relationship between each object to be selected and the target object, screening from the objects to be selected at least one object to be selected whose second relationship matches the first relationship, selecting one object from that at least one object to be selected, and replacing the target object with the selected object.
In this embodiment, the first relationship between the target object and the user desired object includes at least one of a size relationship, a position relationship, and a display effect of one of the two with respect to the other. The first relationship may also indicate corresponding attribute values; for example, the size relationship may indicate the size difference between the user desired object and the target object, i.e., how much larger or smaller the user desired object is relative to the target object. For details, refer to the above description, which this embodiment does not repeat.
Taking the size relationship of the user desired object relative to the target object as an example: if that relationship is that the size of the user desired object is larger than that of the target object, then objects to be selected whose size is larger than that of the target object are screened from the objects to be selected, and one object is then selected from the screened objects. Similarly, if the first relationship includes at least two relationships, each screened object to be selected should satisfy all of the at least two relationships with respect to the target object simultaneously.
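The relationship-based re-selection above can be sketched as follows: compute the second relationship between each candidate and the current target, keep candidates whose relationship matches the first relationship, and replace the target. The objects, sizes, and use of area as the size measure are invented for the example.

```python
def size_relation(candidate, target):
    """Second relationship: how the candidate's size (here, area)
    compares to the current target's size."""
    ca, ta = candidate["w"] * candidate["h"], target["w"] * target["h"]
    return "larger" if ca > ta else "smaller" if ca < ta else "equal"

def reselect(target, candidates, first_relationship="larger"):
    """Screen candidates whose second relationship matches the first
    relationship indicated by the operation, then replace the target."""
    matches = [c for c in candidates
               if size_relation(c, target) == first_relationship]
    return matches[0] if matches else target   # keep old target if no match

target = {"name": "cup", "w": 40, "h": 60}      # area 2400
candidates = [
    {"name": "phone",  "w": 30,  "h": 70},      # area 2100 -> "smaller"
    {"name": "laptop", "w": 120, "h": 80},      # area 9600 -> "larger"
]
new_target = reselect(target, candidates)
print(new_target["name"])  # -> laptop: the only candidate larger than "cup"
```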
In addition, in this embodiment, the selecting unit 13 may combine the two ways when selecting the target object; for example, over multiple selections of the target object, some selections may be based on the attribute requirement of the user desired object while others are based on the first relationship between the target object and the user desired object.
A further point should be explained here: if the target object needs to be selected by means of multiple first operations, then for the first obtained first operation the target object can be selected based on the user desired object information indicated by that operation alone, whereas for each subsequent first operation the target object needs to be selected using the user desired object information indicated by the current first operation together with that indicated by all previous first operations. Taking the i-th first operation (i being a natural number greater than 1) as an example, when the i-th first operation is obtained, the target object is selected using the user desired object information indicated by the first through the i-th first operations together. For example, when the target object is selected by an AI algorithm, the user desired object information indicated by the first through the i-th first operations may be used as input parameters of the AI algorithm.
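Accumulating the information from operations 1 through i can be sketched as a growing list of constraints applied together at each selection; here simple numeric candidates and predicates stand in for the AI algorithm's input parameters, all invented for illustration.

```python
def select_with_history(candidates, history):
    """Select using the user desired object information indicated by the
    1st through i-th first operations together (history: one constraint
    per operation, all applied at once)."""
    eligible = [c for c in candidates if all(p(c) for p in history)]
    return eligible[0] if eligible else None

candidates = [10, 25, 40, 70]
history = []

history.append(lambda x: x > 20)     # info from the 1st first operation
assert select_with_history(candidates, history) == 25

history.append(lambda x: x > 30)     # 2nd operation: both constraints apply
print(select_with_history(candidates, history))  # -> 40
```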
According to the above technical scheme, when a first operation for a target object in a first image is obtained, the user desired object information indicated by the first operation is determined in response to the first operation, and the target object is selected from the objects to be selected in the first image based on that information. Because the target object is selected based on the user desired object information, the matching degree between the target object and the user desired object is improved; a target object consistent with the user desired object can be obtained in one pass, without repeated photographing, so that the efficiency of selecting the target object is improved.
In addition, the determining unit 12 is further configured to determine, in response to the first operation, the target object that was selected before the first operation as a non-user-desired object; or the acquisition unit 11 obtains a second operation for the target object in the first image, and the determining unit 12, in response to the second operation, determines the target object that was selected before the second operation as a non-user-desired object. In this way, the non-user-desired object can be excluded when the target object is selected again, avoiding re-selection of the same target object. Moreover, when the selected target object is determined as a non-user-desired object, the non-user-desired object may be stored in an exclusion list, and every object in the exclusion list is prohibited from participating in target object selection; alternatively, the non-user-desired objects and the objects to be selected may be stored separately, for example in different storage spaces, so that the target object is selected from the storage space holding the objects to be selected, reducing the probability of selection errors.
In addition, an embodiment of the present invention further provides a terminal, which includes a processor and a memory. The processor is configured to acquire a first operation for a target object in a first image, determine, in response to the first operation, the user desired object information indicated by the first operation, and select the target object from the objects to be selected in the first image based on that information; the memory is configured to store the objects to be selected and/or the target object.
In this embodiment, the processor may select the target object in either of two ways. In one way, the user desired object information includes the attribute requirement of the user desired object, and the processor is configured to determine, from the objects to be selected, at least one object to be selected that meets the attribute requirement of the user desired object, and to select one object from that at least one object to be selected as the target object. In the other way, the user desired object information includes a first relationship between the target object and the user desired object, and the processor is configured to determine a second relationship between each object to be selected and the target object, screen from the objects to be selected at least one object to be selected whose second relationship matches the first relationship, select one object from that at least one object to be selected, and replace the target object with the selected object.
An embodiment of the present invention further provides a storage medium storing computer program code which, when executed, implements the target object selection method described above.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A method for selecting a target object, the method comprising:
acquiring a first operation aiming at a target object in a first image;
in response to the first operation, determining user-desired-object information indicated by the first operation, wherein the first operation is an operation of adjusting a terminal posture, and the determining, in response to the first operation, user-desired-object information indicated by the first operation includes: determining the user-desired-object information according to the state of the terminal after the posture adjustment;
and selecting a target object from various objects to be selected of the first image based on the user desired object information indicated by the first operation.
2. The method of claim 1, wherein the user-desired object information comprises: attribute requirements of the user-desired object;
the selecting a target object from the objects to be selected of the first image based on the user desired object information indicated by the first operation comprises:
determining at least one object to be selected which meets the attribute requirement of the object expected by the user from the objects to be selected;
and selecting one object from the at least one object to be selected as a target object.
3. The method according to claim 2, wherein the determining at least one object to be selected which meets the attribute requirement of the user desired object from the objects to be selected comprises:
and determining at least one object to be selected which meets at least one of the requirements of the size of the object desired by the user, the height and width of the object desired by the user, the position of the object desired by the user and the display effect of the object desired by the user from the various objects to be selected.
4. The method of claim 1, wherein the user-desired object information comprises: a first relationship between the target object and a user desired object;
the selecting a target object from the objects to be selected of the first image based on the user desired object information indicated by the first operation comprises:
determining a second relationship between each object to be selected and the target object;
screening at least one object to be selected with the second relation matched with the first relation from each object to be selected;
and selecting one object from the at least one object to be selected, and replacing the target object with the selected object.
5. The method of claim 4, wherein the first relationship between the target object and the user desired object comprises: at least one of a size relationship, a positional relationship, and a display effect of one of the target object and the user desired object with respect to the other;
the screening of the at least one object to be selected, of which the second relationship is matched with the first relationship, from the objects to be selected includes: and screening at least one object to be selected with the second relation identical to the first relation from the objects to be selected.
6. The method of claim 1, further comprising: in response to the first operation, determining a target object selected before the first operation as a non-user-desired object; or
Obtaining a second operation for the target object in the first image;
and in response to the second operation, determining the target object selected before the second operation as a non-user-desired object.
7. A target object selection apparatus, the apparatus comprising:
an acquisition unit configured to acquire a first operation for a target object in a first image;
a determination unit configured to determine, in response to the first operation, user-desired-object information indicated by the first operation, where the first operation is an operation of adjusting a terminal posture, and the determining, in response to the first operation, the user-desired-object information indicated by the first operation includes: determining the user-desired-object information according to the state of the terminal after the posture adjustment;
and the selecting unit is used for selecting a target object from each object to be selected of the first image based on the user-desired-object information indicated by the first operation.
8. The apparatus of claim 7, wherein the user-desired-object information comprises: attribute requirements of the user-desired object; the selecting unit is used for determining at least one object to be selected which meets the attribute requirement of the object expected by the user from the objects to be selected, and selecting one object from the at least one object to be selected as a target object;
or
The user desired object information includes: a first relationship between the target object and a user desired object; the selecting unit is configured to determine a second relationship between each object to be selected and the target object, screen at least one object to be selected from the objects to be selected, the second relationship of which matches the first relationship, select one object from the at least one object to be selected, and replace the target object with the selected object.
9. A terminal, characterized in that the terminal comprises a processor and a memory;
the processor is configured to acquire a first operation for a target object in a first image, determine, in response to the first operation, user-desired-object information indicated by the first operation, and select, based on the user-desired-object information indicated by the first operation, the target object from among objects to be selected in the first image, where the first operation is an operation of adjusting a terminal posture, and the determining, in response to the first operation, the user-desired-object information indicated by the first operation includes: determining the user-desired-object information according to the state of the terminal after the posture adjustment;
the memory is used for storing each object to be selected and/or the target object.
10. A storage medium having stored thereon computer program code which, when executed, implements a target object selection method as claimed in any one of claims 1 to 6.
CN201811150282.8A 2018-09-29 2018-09-29 Target object selection method and device Active CN109151320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811150282.8A CN109151320B (en) 2018-09-29 2018-09-29 Target object selection method and device

Publications (2)

Publication Number Publication Date
CN109151320A CN109151320A (en) 2019-01-04
CN109151320B (en) 2022-04-22

Family

ID=64813815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811150282.8A Active CN109151320B (en) 2018-09-29 2018-09-29 Target object selection method and device

Country Status (1)

Country Link
CN (1) CN109151320B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110392211B (en) * 2019-07-22 2021-04-23 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079812A (en) * 2013-03-25 2014-10-01 联想(北京)有限公司 Method and device of acquiring image information
CN104853096A (en) * 2015-04-30 2015-08-19 广东欧珀移动通信有限公司 Rotation camera-based shooting parameter determination method and terminal
CN105075237A (en) * 2013-02-28 2015-11-18 索尼公司 Image processing apparatus, image processing method, and program
CN105302434A (en) * 2015-06-16 2016-02-03 深圳市腾讯计算机系统有限公司 Method and device for locking targets in game scene
CN105549853A (en) * 2015-02-16 2016-05-04 上海逗屋网络科技有限公司 Method and device for determining target operation object on touch terminal
CN105912589A (en) * 2016-03-31 2016-08-31 联想(北京)有限公司 Information processing method and electronic device
CN107395986A (en) * 2017-08-28 2017-11-24 联想(北京)有限公司 Image acquiring method, device and electronic equipment
CN108366203A (en) * 2018-03-01 2018-08-03 北京金山安全软件有限公司 Composition method, composition device, electronic equipment and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003132361A (en) * 2001-10-29 2003-05-09 Sharp Corp Object selecting device and method
JP5038465B2 (en) * 2010-05-25 2012-10-03 任天堂株式会社 Information processing program, information processing apparatus, information processing method, and information processing system
CN102609167A (en) * 2011-01-25 2012-07-25 联想(北京)有限公司 Electronic equipment and display method thereof
CN102959551B (en) * 2011-04-25 2017-02-08 松下电器(美国)知识产权公司 Image-processing device
CN102866820B (en) * 2011-07-07 2017-12-08 上海聚力传媒技术有限公司 Method, apparatus and equipment for the selection display object in display interface
CN103019610A (en) * 2012-12-31 2013-04-03 北京百度网讯科技有限公司 Object selection method and terminal
CN103309580A (en) * 2013-07-05 2013-09-18 珠海金山办公软件有限公司 Method and device for selecting intended target
CN104427122B (en) * 2013-09-09 2017-08-29 联想(北京)有限公司 Portable electric appts and its object selection method
JP6646936B2 (en) * 2014-03-31 2020-02-14 キヤノン株式会社 Image processing apparatus, control method thereof, and program
CN103970500B (en) * 2014-03-31 2017-03-29 小米科技有限责任公司 The method and device that a kind of picture shows
CN104182043B (en) * 2014-08-15 2017-05-03 北京智谷睿拓技术服务有限公司 Object picking-up method, object picking-up device and user equipment
CN104391646B (en) * 2014-11-19 2017-12-26 百度在线网络技术(北京)有限公司 The method and device of regulating object attribute information
CN105094612A (en) * 2015-07-30 2015-11-25 努比亚技术有限公司 Object selecting method and device
CN105554364A (en) * 2015-07-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal
CN106527878B (en) * 2015-09-09 2019-12-27 阿里巴巴集团控股有限公司 Object processing method and device
CN106569590B (en) * 2015-10-10 2019-09-03 华为技术有限公司 Object selection method and device
CN106896976A (en) * 2015-12-18 2017-06-27 中兴通讯股份有限公司 The changing method of the object that cursor is chosen, device and terminal
CN106875205B (en) * 2016-07-11 2020-08-04 阿里巴巴集团控股有限公司 Object selection method and device
CN106354391B (en) * 2016-08-31 2019-05-17 维沃移动通信有限公司 A kind of control method and mobile terminal of mobile terminal
CN106937055A (en) * 2017-03-30 2017-07-07 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107644425B (en) * 2017-09-30 2018-08-24 湖南友哲科技有限公司 Target image choosing method, device, computer equipment and storage medium
CN107837529B (en) * 2017-11-15 2019-08-27 腾讯科技(上海)有限公司 A kind of object selection method, device, terminal and storage medium
CN107967096A (en) * 2017-11-24 2018-04-27 网易(杭州)网络有限公司 Destination object determines method, apparatus, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109151320A (en) 2019-01-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant