WO2020015368A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2020015368A1
WO2020015368A1 PCT/CN2019/077303
Authority
WO
WIPO (PCT)
Prior art keywords
gaze
trajectory
image
track
user
Prior art date
Application number
PCT/CN2019/077303
Other languages
English (en)
French (fr)
Inventor
刘琳
秦林婵
黄通兵
Original Assignee
北京七鑫易维信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京七鑫易维信息技术有限公司
Publication of WO2020015368A1 publication Critical patent/WO2020015368A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection

Definitions

  • the present application relates to the field of image processing, and in particular, to an image processing method and device.
  • a computer can automatically determine the outline of a target object in an image to be processed.
  • the image features of the image to be processed may be extracted, so that the outline of the target object in the image to be processed is determined according to the extracted image features.
  • the contour of the target object determined by the computer may not be accurate.
  • the technical problem to be solved by the present application is that, when determining the outline of a target object in an image to be processed, the determined outline of the target object may not be accurate; an image processing method and device are provided to address this.
  • an embodiment of the present application provides an image processing method, including:
  • the first gaze trajectory is a gaze trajectory obtained because the user's gaze point moves along the outline of a target object in the image to be processed;
  • An outline of the target object is determined according to the first gaze trajectory.
  • the method further includes:
  • in response to a gaze track modification instruction triggered by the user, modifying the first gaze track to obtain a modified first gaze track;
  • determining the outline of the target object in the image to be processed according to the first gaze track includes:
  • An outline of a target object in the image to be processed is determined according to the modified first gaze trajectory.
  • the gaze trajectory modification instruction is generated in the following manner:
  • if the user's gaze point is located in a preset area of the image to be processed, the gaze track modification instruction is generated; the preset area corresponds to the function of modifying the first gaze track.
  • the method further includes: displaying prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
  • the method further includes: determining a target image from the image to be processed according to the outline of the target object, and analyzing the target image to obtain analysis information of the target image.
  • the analyzing the target image to obtain the analysis information of the target image includes: searching the target image to obtain information related to the target object in the target image; and/or performing image recognition on the target image to obtain the category of the target object in the target image.
  • the method further includes:
  • if the analysis information satisfies a preset condition, modifying the first gaze trajectory; the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
  • An outline of the target object is determined according to a gaze track obtained by modifying the first gaze track.
  • the modifying the first gaze track includes: acquiring a second gaze track of the user and fusing the first and second gaze tracks to obtain a third gaze track; and/or acquiring a fourth gaze track of the user and adding the fourth gaze track; and/or deleting all or part of the first gaze track.
  • where the second gaze track does not coincide with the first gaze track at all, fusing the first gaze track and the second gaze track to obtain the third gaze track includes:
  • in response to a gaze track fusion instruction triggered by the user, the third gaze track is generated using the first gaze track and the second gaze track.
  • the fourth gaze track does not completely coincide, or does not coincide at all, with the first gaze track.
  • the deleting all or part of the first gaze track includes:
  • a fifth gaze track of the user is acquired, and the overlapping gaze track of the fifth gaze track and the first gaze track is determined; the overlapping gaze track is deleted from the first gaze track.
  • an image processing apparatus including:
  • the obtaining unit is configured to obtain an image to be processed and a first gaze trajectory of a user; wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the contour of a target object in the image to be processed;
  • a first determining unit is configured to determine an outline of the target object according to the first gaze trajectory.
  • the device further includes:
  • a first modification unit, configured to modify the first gaze trajectory in response to a gaze trajectory modification instruction triggered by the user, to obtain a modified first gaze trajectory;
  • the first determining unit is configured to:
  • An outline of a target object in the image to be processed is determined according to the modified first gaze trajectory.
  • the gaze trajectory modification instruction is generated in the following manner:
  • if the user's gaze point is located in a preset area of the image to be processed, the gaze track modification instruction is generated; the preset area corresponds to the function of modifying the first gaze track.
  • the device further includes:
  • the display unit is configured to display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
  • the device further includes:
  • a second determining unit configured to determine a target image from the to-be-processed image according to the outline of the target object
  • the analysis unit is configured to analyze the target image to obtain analysis information of the target image.
  • the analysis unit is configured to: search the target image to obtain information related to the target object in the target image; and/or perform image recognition on the target image to obtain the category of the target object in the target image.
  • the device further includes:
  • the second modification unit is configured to modify the first gaze trajectory if the analysis information satisfies a preset condition; the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
  • a third determining unit is configured to determine an outline of the target object according to a fixation trajectory obtained by modifying the first fixation trajectory.
  • the modifying the first gaze track includes: acquiring a second gaze track of the user and fusing the first and second gaze tracks to obtain a third gaze track; and/or acquiring a fourth gaze track of the user and adding the fourth gaze track; and/or deleting all or part of the first gaze track.
  • where the second gaze track does not coincide with the first gaze track at all, fusing the first gaze track and the second gaze track to obtain the third gaze track includes:
  • in response to a gaze track fusion instruction triggered by the user, the third gaze track is generated using the first gaze track and the second gaze track.
  • the fourth gaze track does not completely coincide, or does not coincide at all, with the first gaze track.
  • the deleting all or part of the first gaze track includes:
  • a fifth gaze track of the user is acquired, and the overlapping gaze track of the fifth gaze track and the first gaze track is determined; the overlapping gaze track is deleted from the first gaze track.
  • An embodiment of the present application provides an image processing method and device, including: acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of the target object in the image to be processed; and determining the outline of the target object according to the first gaze trajectory.
  • the outline of the target object is thus determined with the user's indirect participation: since the user can intuitively identify the target object in the image to be processed and control the gaze point to move along the outline of the target object in the image to be processed, the outline of the target object can be accurately determined according to the first gaze trajectory.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining related information about a target object according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • the computer can automatically determine the outline of the target object in the image to be processed.
  • the image features of the image to be processed may be extracted, so that the outline of the target object in the image to be processed is determined according to the extracted image features.
  • the contour of the target object determined by the computer may not be accurate.
  • the inventor of the present application has found through research that, in order to improve the accuracy of determining the outline of the target object in the image, the user can be involved indirectly. Since the user can intuitively see the target object in the image to be processed, the indirect participation of the user can improve the accuracy of determining the outline of the target object.
  • embodiments of the present application provide an image processing method and apparatus, including: acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the gaze point of the user moves along the contour of a target object in the image to be processed; and determining the contour of the target object according to the first gaze trajectory.
  • the outline of the target object is thus determined with the user's indirect participation: since the user can intuitively identify the target object in the image to be processed and control the gaze point to move along the outline of the target object in the image to be processed, the outline of the target object can be accurately determined according to the first gaze trajectory.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, in the embodiment of the present application, the method may be implemented through the following steps S101-S102.
  • S101: Acquire an image to be processed and a first gaze trajectory of a user; wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed.
  • the embodiments of the present application do not specifically limit the to-be-processed images, and the to-be-processed images may be various images generated by using virtual reality technology or various images generated by using augmented reality technology.
  • target object mentioned in the embodiments of the present application refers to the object included in the image to be processed.
  • the first gaze trajectory of the user may be obtained by analyzing the eye movement of the user.
  • the first gaze track may be a gaze track on the image to be processed.
  • S102: Determine the outline of the target object according to the first gaze track.
  • the first gaze trajectory is a gaze trajectory obtained as the gaze point of the user moves along the outline of the target object in the image to be processed. The first gaze trajectory can therefore characterize the outline of the target object to a certain extent, and hence the outline of the target object can be determined according to the first gaze trajectory.
  • when step S102 is specifically implemented, multiple implementation manners may be included.
  • the first gaze track may be directly used as a contour of the target object.
  • image features of the image to be processed may also be extracted, and the contour of the target object may be determined in combination with the image features and the first gaze track.
  • with the image processing method provided by the embodiment of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object and control the fixation point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.
  • the first gaze track may also be modified to obtain a modified first gaze track.
  • the outline of the target object is determined based on the modified first gaze trajectory.
  • specifically, in response to a gaze track modification instruction triggered by the user, the first gaze track may be modified to obtain the modified first gaze track.
  • the gaze track modification instruction is used to trigger modification of the first gaze track.
  • the embodiment of the present application does not specifically limit the generation manner of the gaze track modification instruction.
  • the gaze track modification instruction may be generated in the following manner:
  • if the gaze point of the user is located in a preset area in the image to be processed, the gaze track modification instruction is generated; wherein the preset area corresponds to the function of modifying the first gaze track.
  • the embodiment of the present application does not specifically limit the specific position of the preset area, and the position of the preset area in the image to be processed may be specifically determined according to an actual situation.
  • the preset area may be labeled with corresponding text, such as "modify", to prompt the user that the preset area corresponds to the function of modifying the first gaze track.
  • the gaze trajectory modification instruction may be generated based on other operations of a user. For example, if the user presses a corresponding button, such as a "modify" button, the gaze track modification instruction is generated.
  • modifying the first gaze track may include any one or more of the following three cases.
  • in the first case, a second gaze trajectory of the user is acquired, and fusion processing is performed on the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory.
  • the second gaze track refers to a gaze track formed by the user's gaze point movement after the user triggers the gaze track modification instruction.
  • the first fixation trajectory and the second fixation trajectory may not coincide at all, while the partial contour of the target object corresponding to the first fixation trajectory and the partial contour corresponding to the second fixation trajectory are actually connected. In that case, the first and second fixation trajectories taken separately cannot characterize the outline of the target object well.
  • the first fixation trajectory and the second fixation trajectory may be fused to obtain a third fixation trajectory.
  • the third gaze trajectory can better characterize the contour of the target object.
  • when fusion processing is performed on the first fixation trajectory and the second fixation trajectory to obtain the third fixation trajectory, the third gaze trajectory may be generated using the first gaze trajectory and the second gaze trajectory in response to a gaze trajectory fusion instruction triggered by the user.
  • the embodiment of the present application does not specifically limit the specific implementation manner of generating the gaze track fusion instruction.
  • the implementation of generating the gaze track fusion instruction is similar to the manner of generating the gaze track modification instruction. Specifically, in a possible implementation manner, if the gaze point of the user is located in an area corresponding to the gaze-track fusion function, the gaze track fusion instruction is generated. In another possible implementation manner, the gaze track fusion instruction may be generated based on other operations of the user. For example, if the user presses a corresponding button, such as a "fuse" button, the gaze track fusion instruction is generated.
  • in the second case, a fourth gaze trajectory of the user is acquired, and the fourth gaze trajectory is added.
  • the fourth gaze trajectory refers to a gaze trajectory formed by the user's gaze point movement after the user triggers the gaze trajectory modification instruction.
  • the first gaze trajectory may only represent a part of the outline of the target object. Therefore, in the embodiment of the present application, an addition operation may be performed on the first gaze trajectory, adding a fourth gaze trajectory that can characterize other parts of the outline of the target object.
  • the fourth fixation trajectory does not completely coincide with the first fixation trajectory, or the fourth fixation trajectory does not coincide with the first fixation trajectory at all.
  • in the third case, all or part of the gaze tracks in the first gaze track are deleted.
  • part or all of the first gaze trajectory may be formed by gaze points located in areas outside the outline of the target object; that is, part or all of the first gaze trajectory may fail to represent the outline of the target object. Therefore, in the embodiment of the present application, a deletion operation may be performed on the first gaze trajectory to delete the gaze tracks that cannot be used to characterize the contour of the target object.
  • deleting all or part of the gaze track of the first gaze track may be implemented by the following steps A-B.
  • Step A: Obtain a fifth gaze track of the user, and determine an overlapping gaze track of the fifth gaze track and the first gaze track.
  • Step B: Delete the overlapping gaze track from the first gaze track.
  • the fifth gaze trajectory refers to a gaze trajectory formed by the user's gaze point movement after the user triggers the gaze trajectory modification instruction.
  • regarding step A and step B, it should be noted that the user can control his or her gaze point to move along the part or all of the gaze track the user wants deleted, thereby generating the fifth gaze track.
  • the part or all of the fixation trajectory that the user wishes to delete then appears in both the fifth fixation trajectory and the first fixation trajectory, so that the overlapping gaze track of the fifth gaze track and the first gaze track can be deleted from the first gaze track.
  • the user may not know how to participate indirectly in determining the outline of the target object. Therefore, in a possible implementation manner of the embodiment of the present application, the user may also be prompted how to participate indirectly in determining the outline of the target object.
  • prompt information for prompting the user to control the user's gaze point to move along the outline of the target object may be displayed.
  • the embodiment of the present application does not specifically limit the specific content of the prompt information used to prompt the user to control the user's gaze point to move along the outline of the target object.
  • the prompt information for prompting the user to control the user's gaze point to move along the outline of the target object may include text information, and/or audio information, and/or video information.
  • the user may determine, according to the prompt information, a manner of indirectly participating in determining the outline of the target object by controlling the user's gaze point to move along the outline of the target object.
  • prompt information for prompting the user how to modify the first gaze track may also be displayed.
  • the embodiment of the present application does not specifically limit the specific content of the prompt information used to prompt the user how to modify the first gaze track. As an example, this prompt information may include any one or more of the following:
  • a rule for performing an addition operation on the first gaze track, a rule for performing a deletion operation on the first gaze track, a rule for performing a fusion operation on the first gaze track and the second gaze track, and the like.
  • the prompt information for prompting the user may include text information, and/or audio information, and/or video information.
  • the user may want to know more information related to the target object, for example the category of the target object and other information.
  • FIG. 2 is a schematic flowchart of a method for obtaining related information about the target object according to an embodiment of the present application. As shown in FIG. 2, the method may be implemented through the following steps S201-S202.
  • S201: Determine a target image from the to-be-processed image according to the outline of the target object.
  • the embodiment of the present application does not specifically limit the specific implementation manner of determining the target image from the to-be-processed image according to the contour of the target object.
  • the target image may be an image within the outline of the target object.
  • the target image may be an image including the image within the target contour and the contour of the target object.
  • S202: Analyze the target image to obtain analysis information of the target image.
  • the embodiments of the present application do not specifically limit the analysis information.
  • the analysis information may include a category of the target object, and/or information that is related to the target object.
  • when step S202 is specifically implemented, it may be implemented in any one or both of the following two ways.
  • the first way: search the target image to obtain information related to the target object in the target image.
  • a corresponding search engine may be called and used to search the target image, thereby obtaining information related to the target object.
  • the embodiments of the present application do not specifically limit specific information included in the information that is related to the target object.
  • as an example, if the target object is a puppy, the information related to the target object may include information about the puppy's breed, preferences, and living habits.
  • the second way: perform image recognition on the target image to obtain the category of the target object in the target image.
  • a module having an image recognition function may be called to identify the target image, thereby obtaining the category of the target object. It is also possible to extract the image features of the target image and analyze the image features to obtain the category of the target object.
  • the embodiment of the present application does not specifically limit the category of the target object.
  • for example, if the target object is a puppy, the category of the target object may be "puppy", and the category of the target object may also be "animal".
  • the analysis information may be further displayed so that the user can view the analysis information.
  • the more accurate the outline of the target object, the more analysis information analyzing the target image may yield, so the data amount of the analysis information can, to a certain extent, measure the accuracy of the outline. Since the outline of the target object is determined based on the first fixation trajectory, when the data amount of the analysis information is relatively small, the first fixation trajectory may be further modified so as to re-determine the outline of the target object.
  • in view of this, in a possible implementation manner of the embodiment of the present application, the following steps C-D may be further included.
  • Step C: If the analysis information satisfies a preset condition, modify the first gaze trajectory; the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold.
  • the embodiments of the present application do not specifically limit the preset number threshold, and the preset number threshold may be determined according to an actual situation.
  • Step D: Determine the outline of the target object according to the fixation trajectory obtained by modifying the first fixation trajectory.
  • the above embodiment describes an image processing method. The method is described below in conjunction with a specific scene.
  • the user is experiencing virtual reality technology, and the user wants to know information about the target object in the image displayed on the display screen in the virtual reality system.
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 3, the method may be implemented through the following steps S301 to S311.
  • S301: Display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object in the image to be processed.
  • the image to be processed mentioned here refers to an image displayed on a display screen in a virtual reality system.
  • S302: Acquire a to-be-processed image and a user's first gaze trajectory; wherein the first gaze trajectory is a gaze trajectory obtained as the gaze point of the user moves along the contour of the target object.
  • S303: In response to a gaze track modification instruction triggered by a user, modify the first gaze track to obtain a modified first gaze track.
  • S304: Determine a first contour of the target object according to the modified first gaze track.
  • S305: Determine a first target image from the to-be-processed image according to the first contour of the target object.
  • S306: Analyze the first target image to obtain first analysis information of the first target image.
  • between step S306 and step S307, after the first analysis information of the first target image is obtained, the first analysis information may also be displayed.
  • S307: Determine that the data amount of the first analysis information is less than a preset number threshold, and modify the first gaze track.
  • S308: Determine a second contour of the target object according to a gaze track obtained by modifying the first gaze track.
  • S309: Determine a second target image from the to-be-processed image according to the second contour of the target object.
  • S310: Analyze the second target image to obtain second analysis information of the second target image.
  • S311: Determine that the data amount of the second analysis information is greater than the preset number threshold, and display the second analysis information.
  • after step S311, if the user considers that the second analysis information is inaccurate, the user may also actively trigger the modification of the first gaze track.
  • FIG. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • As shown in FIG. 4, the apparatus 400 may specifically include: an obtaining unit 410 and a first determining unit 420.
  • the obtaining unit 410 is configured to obtain an image to be processed and a first gaze trajectory of a user; wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the contour of a target object in the image to be processed;
  • the first determining unit 420 is configured to determine an outline of the target object according to the first gaze trajectory.
  • the apparatus 400 further includes:
  • a first modification unit, configured to modify the first gaze trajectory in response to a gaze trajectory modification instruction triggered by the user, to obtain a modified first gaze trajectory;
  • the first determining unit 420 is configured to:
  • An outline of a target object in the image to be processed is determined according to the modified first gaze trajectory.
  • the gaze trajectory modification instruction is generated in the following manner:
  • if the user's gaze point is located in a preset area of the image to be processed, the gaze track modification instruction is generated; the preset area corresponds to the function of modifying the first gaze track.
  • the apparatus 400 further includes:
  • the display unit is configured to display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
  • the apparatus 400 further includes:
  • a second determining unit configured to determine a target image from the to-be-processed image according to the outline of the target object
  • the analysis unit is configured to analyze the target image to obtain analysis information of the target image.
  • the analysis unit is configured to: search the target image to obtain information related to the target object in the target image; and/or perform image recognition on the target image to obtain the category of the target object in the target image.
  • the apparatus 400 further includes:
  • the second modification unit is configured to modify the first gaze trajectory if the analysis information satisfies a preset condition; the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
  • a third determining unit is configured to determine an outline of the target object according to a fixation trajectory obtained by modifying the first fixation trajectory.
  • the modifying the first gaze track includes: acquiring a second gaze track of the user and fusing the first and second gaze tracks to obtain a third gaze track; and/or acquiring a fourth gaze track of the user and adding the fourth gaze track; and/or deleting all or part of the first gaze track.
  • where the second gaze track does not coincide with the first gaze track at all, fusing the first gaze track and the second gaze track to obtain the third gaze track includes:
  • in response to a gaze track fusion instruction triggered by the user, the third gaze track is generated using the first gaze track and the second gaze track.
  • the fourth gaze track does not completely coincide, or does not coincide at all, with the first gaze track.
  • the deleting all or part of the first gaze track includes:
  • a fifth gaze track of the user is acquired, and the overlapping gaze track of the fifth gaze track and the first gaze track is determined; the overlapping gaze track is deleted from the first gaze track.
  • the outline of the target object is thus determined with the user's indirect participation: since the user can intuitively identify the target object in the image to be processed and control the gaze point to move along the outline of the target object in the image to be processed, the outline of the target object can be accurately determined according to the first gaze trajectory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses an image processing method and device, including: acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed; and determining the outline of the target object according to the first gaze trajectory. It can thus be seen that in the embodiments of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object in it and control the gaze point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method and device.
Background
In current image processing technology, a computer can automatically determine the outline of a target object in an image to be processed. Specifically, image features of the image to be processed can be extracted, and the outline of the target object in the image to be processed is then determined according to the extracted image features.
However, limiting factors such as the computing performance of the computer and the number of image features the computer extracts mean that the outline of the target object determined by the computer may not be accurate.
Summary
The technical problem to be solved by the present application is that, when determining the outline of a target object in an image to be processed, the determined outline may not be accurate; an image processing method and device are provided.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed;
determining the outline of the target object according to the first gaze trajectory.
Optionally, the method further includes:
in response to a gaze trajectory modification instruction triggered by the user, modifying the first gaze trajectory to obtain a modified first gaze trajectory;
correspondingly, determining the outline of the target object in the image to be processed according to the first gaze trajectory includes:
determining the outline of the target object in the image to be processed according to the modified first gaze trajectory.
Optionally, the gaze trajectory modification instruction is generated as follows:
if the user's gaze point is located in a preset area of the image to be processed, the gaze trajectory modification instruction is generated; the preset area corresponds to the function of modifying the first gaze trajectory.
Optionally, the method further includes:
displaying prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
Optionally, the method further includes:
determining a target image from the image to be processed according to the outline of the target object; analyzing the target image to obtain analysis information of the target image.
Optionally, analyzing the target image to obtain the analysis information of the target image includes:
searching the target image to obtain information related to the target object in the target image;
and/or,
performing image recognition on the target image to obtain the category of the target object in the target image.
Optionally, the method further includes:
if the analysis information satisfies a preset condition, modifying the first gaze trajectory, wherein the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
determining the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
Optionally, modifying the first gaze trajectory includes:
acquiring a second gaze trajectory of the user, and fusing the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory; and/or,
acquiring a fourth gaze trajectory of the user, and adding the fourth gaze trajectory; and/or,
deleting all or part of the first gaze trajectory.
Optionally, the second gaze trajectory does not coincide with the first gaze trajectory at all, and fusing the first gaze trajectory and the second gaze trajectory to obtain the third gaze trajectory includes:
in response to a gaze trajectory fusion instruction triggered by the user, generating the third gaze trajectory from the first gaze trajectory and the second gaze trajectory.
Optionally, the fourth gaze trajectory does not completely coincide, or does not coincide at all, with the first gaze trajectory.
Optionally, deleting all or part of the first gaze trajectory includes:
acquiring a fifth gaze trajectory of the user, and determining the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory;
deleting the overlapping gaze trajectory from the first gaze trajectory.
In a second aspect, an embodiment of the present application provides an image processing device, including:
an acquisition unit, configured to acquire an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed;
a first determining unit, configured to determine the outline of the target object according to the first gaze trajectory.
Optionally, the device further includes:
a first modification unit, configured to modify, in response to a gaze trajectory modification instruction triggered by the user, the first gaze trajectory to obtain a modified first gaze trajectory;
correspondingly, the first determining unit is configured to:
determine the outline of the target object in the image to be processed according to the modified first gaze trajectory.
Optionally, the gaze trajectory modification instruction is generated as follows:
if the user's gaze point is located in a preset area of the image to be processed, the gaze trajectory modification instruction is generated; the preset area corresponds to the function of modifying the first gaze trajectory.
Optionally, the device further includes:
a display unit, configured to display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
Optionally, the device further includes:
a second determining unit, configured to determine a target image from the image to be processed according to the outline of the target object;
an analysis unit, configured to analyze the target image to obtain analysis information of the target image.
Optionally, the analysis unit is configured to:
search the target image to obtain information related to the target object in the target image;
and/or,
perform image recognition on the target image to obtain the category of the target object in the target image.
Optionally, the device further includes:
a second modification unit, configured to modify the first gaze trajectory if the analysis information satisfies a preset condition, wherein the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
a third determining unit, configured to determine the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
Optionally, modifying the first gaze trajectory includes:
acquiring a second gaze trajectory of the user, and fusing the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory; and/or,
acquiring a fourth gaze trajectory of the user, and adding the fourth gaze trajectory; and/or,
deleting all or part of the first gaze trajectory.
Optionally, the second gaze trajectory does not coincide with the first gaze trajectory at all, and fusing the first gaze trajectory and the second gaze trajectory to obtain the third gaze trajectory includes:
in response to a gaze trajectory fusion instruction triggered by the user, generating the third gaze trajectory from the first gaze trajectory and the second gaze trajectory.
Optionally, the fourth gaze trajectory does not completely coincide, or does not coincide at all, with the first gaze trajectory.
Optionally, deleting all or part of the first gaze trajectory includes:
acquiring a fifth gaze trajectory of the user, and determining the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory;
deleting the overlapping gaze trajectory from the first gaze trajectory.
Compared with the prior art, the embodiments of the present application have the following advantages:
An embodiment of the present application provides an image processing method and device, including: acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed; and determining the outline of the target object according to the first gaze trajectory. It can thus be seen that in the embodiments of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object in it and control the gaze point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a method for obtaining information related to a target object according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
On the one hand, the inventors of the present application found through research that in the prior art, with current image processing technology, a computer can automatically determine the outline of a target object in an image to be processed. Specifically, image features of the image to be processed can be extracted, and the outline of the target object is then determined according to the extracted image features. However, limiting factors such as the computing performance of the computer and the number of image features the computer extracts mean that the outline determined by the computer may not be accurate.
On the other hand, the inventors of the present application found through research that, to improve the accuracy of determining the outline of the target object in the image, the user can participate indirectly. Since the user can intuitively see the target object in the image to be processed, having the user participate indirectly can improve the accuracy of determining the outline of the target object.
In view of this, embodiments of the present application provide an image processing method and device, including: acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed; and determining the outline of the target object according to the first gaze trajectory. It can thus be seen that in the embodiments of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object in it and control the gaze point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.
Various non-limiting embodiments of the present application are described in detail below with reference to the drawings.
Exemplary Method
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, in the embodiment of the present application, the method may be implemented through the following steps S101-S102.
S101: Acquire an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed.
It should be noted that the technical solutions provided by the embodiments of the present application can be applied in virtual reality technology as well as augmented reality technology.
It should be noted that the embodiments of the present application do not specifically limit the image to be processed; it may be any image generated using virtual reality technology, or any image generated using augmented reality technology.
It should be noted that the target object mentioned in the embodiments of the present application refers to an object contained in the image to be processed.
It should be noted that in the embodiments of the present application, the user's first gaze trajectory can be obtained by analyzing the user's eye movements. The first gaze trajectory may be a gaze trajectory on the image to be processed.
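The patent does not bind gaze acquisition to any particular eye-tracking hardware or SDK. Purely as an illustration, the sketch below assumes a hypothetical `tracker` object exposing a `current_gaze()` method that returns the gaze point in image coordinates; real eye-tracker APIs, sampling rates, and filtering differ.

```python
import time

def record_first_gaze_trajectory(tracker, duration_s=5.0, rate_hz=60):
    """Sample the user's gaze point for duration_s seconds and return the
    accumulated first gaze trajectory as a list of (x, y) image coordinates.

    tracker is a hypothetical eye-tracker object exposing
    current_gaze() -> (x, y); real SDKs name and shape this differently.
    """
    trajectory = []
    interval = 1.0 / rate_hz
    end_time = time.time() + duration_s
    while time.time() < end_time:
        trajectory.append(tracker.current_gaze())
        time.sleep(interval)
    return trajectory
```

In practice the raw samples would also be smoothed or fixation-filtered, but that is outside what the patent specifies.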
S102: Determine the outline of the target object according to the first gaze trajectory.
It can be understood that, since the first gaze trajectory is obtained as the user's gaze point moves along the outline of the target object in the image to be processed, the first gaze trajectory can characterize the outline of the target object to a certain extent. Hence, the outline of the target object can be determined according to the first gaze trajectory.
It should be noted that step S102 may be implemented in multiple ways.
In one possible implementation, the first gaze trajectory may be used directly as the outline of the target object.
In another possible implementation, image features of the image to be processed may also be extracted, and the outline of the target object determined by combining the image features with the first gaze trajectory, as sketched below.
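The patent leaves open how image features are combined with the first gaze trajectory. One plausible reading, consistent with the G06T7/13 edge-detection classification, is to snap each gaze point to the nearest strong edge pixel; the following is a minimal sketch under that assumption, not the claimed method itself.

```python
import cv2
import numpy as np

def snap_trajectory_to_edges(image_bgr, gaze_points, search_radius=15):
    """Refine raw gaze points by moving each one to the nearest Canny edge
    pixel within search_radius; points with no nearby edge stay unchanged."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:                       # no edges found: keep raw points
        return list(gaze_points)
    edge_pts = np.stack([xs, ys], axis=1)  # (N, 2) as (x, y)

    refined = []
    for x, y in gaze_points:
        d2 = np.sum((edge_pts - np.array([x, y])) ** 2, axis=1)
        i = int(np.argmin(d2))
        if d2[i] <= search_radius ** 2:
            refined.append((int(edge_pts[i, 0]), int(edge_pts[i, 1])))
        else:
            refined.append((x, y))
    return refined
```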
It can thus be seen that with the image processing method provided by the embodiment of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object in it and control the gaze point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.
It should be noted that some abnormal situations may arise while the user controls the gaze point to move along the outline of the target object in the image to be processed. As a result, the first gaze trajectory may be insufficient to characterize the outline of the target object, making the outline determined from it not accurate enough. Therefore, in the embodiments of the present application, the first gaze trajectory may also be modified to obtain a modified first gaze trajectory, and the outline of the target object is then determined based on the modified first gaze trajectory.
Specifically, in response to a gaze trajectory modification instruction triggered by the user, the first gaze trajectory may be modified to obtain the modified first gaze trajectory.
It should be noted that the gaze trajectory modification instruction is used to trigger modification of the first gaze trajectory.
The embodiments of the present application do not specifically limit how the gaze trajectory modification instruction is generated. In one possible implementation of the embodiments of the present application, the gaze trajectory modification instruction may be generated as follows:
if the user's gaze point is located in a preset area of the image to be processed, the gaze trajectory modification instruction is generated; the preset area corresponds to the function of modifying the first gaze trajectory.
It should be noted that the embodiments of the present application do not specifically limit the exact position of the preset area; its position in the image to be processed can be determined according to the actual situation.
It should be noted that in the embodiments of the present application, the preset area may be labeled with corresponding text, such as "Modify", to prompt the user that the preset area corresponds to the function of modifying the first gaze trajectory.
In another implementation of the embodiments of the present application, the gaze trajectory modification instruction may be generated based on other user operations; for example, if the user presses a corresponding button, such as a "Modify" button, the gaze trajectory modification instruction is generated.
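As a rough sketch of the preset-area mechanism, the function below checks whether gaze samples fall inside an assumed rectangular "Modify" region; the dwell-time debounce is an added assumption, since the patent only requires that the gaze point be located in the preset area.

```python
def make_modify_region_checker(region, dwell_s=1.0, rate_hz=60):
    """Return a stateful checker: feed it each gaze sample, and it returns
    True once the gaze point has dwelt inside region long enough, which the
    application can treat as the gaze-trajectory modification instruction.

    region is (x0, y0, x1, y1) in image coordinates.
    """
    needed = max(1, int(dwell_s * rate_hz))
    count = 0

    def check(x, y):
        nonlocal count
        x0, y0, x1, y1 = region
        count = count + 1 if (x0 <= x <= x1 and y0 <= y <= y1) else 0
        return count >= needed

    return check
```

A gaze-trajectory fusion region (see below) could be handled by a second checker built the same way.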
It should be noted that in the embodiments of the present application, modifying the first gaze trajectory may include any one or more of the following three cases.
First case: acquire a second gaze trajectory of the user, and fuse the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory.
It should be noted that the second gaze trajectory refers to the gaze trajectory formed by the movement of the user's gaze point after the user triggers the gaze trajectory modification instruction.
Considering that in practical applications the first gaze trajectory and the second gaze trajectory may not coincide at all, while the partial contour of the target object corresponding to the first gaze trajectory and the partial contour corresponding to the second gaze trajectory are actually connected, the first and second gaze trajectories taken separately cannot characterize the outline of the target object well.
Therefore, in the embodiments of the present application, the first gaze trajectory and the second gaze trajectory may be fused to obtain a third gaze trajectory, so that the third gaze trajectory characterizes the outline of the target object better.
In a specific implementation, fusing the first and second gaze trajectories to obtain the third gaze trajectory may be done by generating the third gaze trajectory from the first and second gaze trajectories in response to a gaze trajectory fusion instruction triggered by the user.
It should be noted that the embodiments of the present application do not specifically limit how the gaze trajectory fusion instruction is generated; it is generated in a way similar to the gaze trajectory modification instruction. Specifically, in one possible implementation, if the user's gaze point is located in an area corresponding to the gaze-trajectory fusion function, the gaze trajectory fusion instruction is generated. In another possible implementation, the gaze trajectory fusion instruction may be generated based on other user operations; for example, if the user presses a corresponding button, such as a "Fuse" button, the gaze trajectory fusion instruction is generated.
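The patent does not specify how the two non-coinciding trajectories are joined. One simple fusion rule, sketched below as an assumption, is to bridge the closest pair of polyline endpoints so the two partial contours become a single connected trajectory.

```python
import numpy as np

def fuse_trajectories(first, second):
    """Fuse two entirely non-coinciding gaze trajectories into a third one by
    bridging the closest pair of endpoints, so the two partial contours they
    trace become a single connected polyline of (x, y) points."""
    a, b = np.asarray(first, float), np.asarray(second, float)
    dist = np.linalg.norm
    candidates = [
        (dist(a[-1] - b[0]),  list(first) + list(second)),        # tail->head
        (dist(a[-1] - b[-1]), list(first) + list(second)[::-1]),  # tail->tail
        (dist(a[0] - b[0]),   list(first)[::-1] + list(second)),  # head->head
        (dist(a[0] - b[-1]),  list(second) + list(first)),        # head->tail
    ]
    return min(candidates, key=lambda c: c[0])[1]
```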
Second case: acquire a fourth gaze trajectory of the user, and add the fourth gaze trajectory.
It should be noted that, like the second gaze trajectory, the fourth gaze trajectory refers to the gaze trajectory formed by the movement of the user's gaze point after the user triggers the gaze trajectory modification instruction.
It can be understood that in practical applications the first gaze trajectory may characterize only part of the outline of the target object. Therefore, in the embodiments of the present application, an addition operation may be performed on the first gaze trajectory, adding a fourth gaze trajectory that can characterize other parts of the outline of the target object.
It should be noted that in one possible implementation of the embodiments of the present application, the fourth gaze trajectory does not completely coincide with the first gaze trajectory, or does not coincide with it at all.
Third case: delete all or part of the first gaze trajectory.
It can be understood that in practical applications, part or all of the first gaze trajectory may be formed by gaze points located in areas outside the outline of the target object; that is, part or all of the first gaze trajectory may fail to characterize the outline of the target object. Therefore, in the embodiments of the present application, a deletion operation may be performed on the first gaze trajectory to delete the gaze tracks in it that cannot be used to characterize the outline of the target object.
Specifically, deleting all or part of the first gaze trajectory may be implemented through the following steps A-B (a code sketch follows their explanation).
Step A: Acquire a fifth gaze trajectory of the user, and determine the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory.
Step B: Delete the overlapping gaze trajectory from the first gaze trajectory.
It should be noted that the fifth gaze trajectory refers to the gaze trajectory formed by the movement of the user's gaze point after the user triggers the gaze trajectory modification instruction.
Regarding steps A and B, it should be noted that the user can control his or her gaze point to move along the part or all of the gaze trajectory the user wants deleted, thereby generating the fifth gaze trajectory. In this way, after the fifth gaze trajectory is acquired, the part or all of the gaze trajectory the user wishes to delete appears both in the fifth gaze trajectory and in the first gaze trajectory, so the overlapping gaze trajectory of the fifth and first gaze trajectories can be deleted from the first gaze trajectory.
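A minimal sketch of steps A-B follows. Since gaze data is noisy, "overlap" is interpreted here as proximity within an assumed pixel tolerance rather than exact point equality; the patent itself leaves the overlap test unspecified.

```python
import numpy as np

def delete_overlap(first, fifth, tol=10.0):
    """Steps A-B: drop from the first gaze trajectory every point lying
    within tol pixels of the fifth gaze trajectory, i.e. the overlap the
    user retraced to mark what should be deleted."""
    if not fifth:
        return list(first)
    fifth_arr = np.asarray(fifth, dtype=float)
    kept = []
    for p in first:
        d = np.linalg.norm(fifth_arr - np.asarray(p, dtype=float), axis=1)
        if d.min() > tol:   # not part of the overlap, so keep it
            kept.append(p)
    return kept
```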
It should be noted that, considering that in practical applications the user may not know how to participate indirectly in determining the outline of the target object, in one possible implementation of the embodiments of the present application the user may also be prompted how to do so.
Specifically, prompt information may be displayed for prompting the user to control the user's gaze point to move along the outline of the target object.
It should be noted that the embodiments of the present application do not specifically limit the content of the prompt information for prompting the user to control the user's gaze point to move along the outline of the target object. As an example, this prompt information may include text information, and/or audio information, and/or video information. Based on the prompt information, the user can determine that the way to participate indirectly in determining the outline of the target object is to control the user's gaze point to move along the outline of the target object.
Correspondingly, after the user triggers the gaze trajectory modification instruction, prompt information for prompting the user how to modify the first gaze trajectory may also be displayed.
On the one hand, the embodiments of the present application do not specifically limit the content of the prompt information for prompting the user how to modify the first gaze trajectory. As an example, it may include any one or more of the following: a rule for performing an addition operation on the first gaze trajectory, a rule for performing a deletion operation on the first gaze trajectory, a rule for performing a fusion operation on the first and second gaze trajectories, and the like.
On the other hand, like the prompt information for prompting the user to control the gaze point to move along the outline of the target object, the prompt information for prompting the user how to modify the first gaze trajectory may include text information, and/or audio information, and/or video information.
It can be understood that in practical applications the user may wish to learn more information related to the target object, for example the category of the target object and other information.
Therefore, in the embodiments of the present application, after the outline of the target object is determined, other information related to the target object may further be obtained.
FIG. 2 is a schematic flowchart of a method for obtaining information related to the target object according to an embodiment of the present application. As shown in FIG. 2, the method may be implemented through the following steps S201-S202.
S201: Determine a target image from the image to be processed according to the outline of the target object.
It should be noted that the embodiments of the present application do not specifically limit how the target image is determined from the image to be processed according to the outline of the target object. As one example, the target image may be the image within the outline of the target object. As another example, the target image may be an image including both the image within the outline and the outline of the target object itself.
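Both readings of S201 can be implemented with a polygon mask over the closed gaze contour. The sketch below uses OpenCV as an assumed toolchain: it keeps the pixels inside the outline (optionally the outline itself) and crops to the contour's bounding box.

```python
import cv2
import numpy as np

def extract_target_image(image_bgr, contour_points, keep_outline=True):
    """Determine the target image from the image to be processed: keep the
    pixels inside the closed gaze contour (and, per the second example in
    the text, the contour itself), cropped to the contour's bounding box."""
    contour = np.asarray(contour_points, dtype=np.int32).reshape(-1, 1, 2)
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [contour], 255)              # interior of the outline
    if keep_outline:
        cv2.polylines(mask, [contour], True, 255, 2)
    masked = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    x, y, w, h = cv2.boundingRect(contour)          # tight crop
    return masked[y:y + h, x:x + w]
```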
S202: Analyze the target image to obtain analysis information of the target image.
It should be noted that the embodiments of the present application do not specifically limit the analysis information. As an example, the analysis information may include the category of the target object, and/or information related to the target object.
Step S202 may be implemented in either or both of the following two ways.
First way: search the target image to obtain information related to the target object in the target image.
It should be noted that when searching the target image, a corresponding search engine may be invoked and used to search the target image, thereby obtaining information related to the target object.
It should be noted that the embodiments of the present application do not specifically limit what the information related to the target object contains. As an example, if the target object is a puppy, the information related to the target object may include the puppy's breed, preferences, living habits, and so on.
Second way: perform image recognition on the target image to obtain the category of the target object in the target image.
It should be noted that when recognizing the target image, a module with an image recognition function may be invoked to recognize the target image and thus obtain the category of the target object. Alternatively, image features of the target image may be extracted and analyzed to obtain the category of the target object.
It should be noted that the embodiments of the present application do not specifically limit the category of the target object. For example, if the target object is a puppy, the category of the target object may be "puppy", or the category of the target object may be "animal".
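As one illustration of the second way, the sketch below classifies the target image with an off-the-shelf ImageNet classifier from torchvision; the patent does not prescribe any particular recognition module, so the model choice and label set are assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_target_image(path, class_names):
    """Return a category label for the target image at path; class_names
    maps class indices to labels (e.g., the ImageNet label list)."""
    model = models.resnet18(pretrained=True)  # newer torchvision: weights=...
    model.eval()
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return class_names[int(logits.argmax(dim=1))]
```

A coarse category such as "animal" rather than "puppy" could be obtained by mapping fine-grained labels onto a broader taxonomy.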
It should be noted that in one possible implementation of the embodiments of the present application, after the analysis information is obtained, the analysis information may also be displayed so that the user can view it.
It can be understood that the more accurate the outline of the target object, the more analysis information analyzing the target image may yield. Therefore, to a certain extent, the data amount of the analysis information can be used to measure the accuracy of the outline of the target object: the smaller the data amount of the analysis information, the less accurate the outline may be. Since the outline of the target object is determined based on the first gaze trajectory, when the data amount of the analysis information is relatively small, the first gaze trajectory may be further modified so as to re-determine the outline of the target object.
In view of this, one possible implementation of the embodiments of the present application may further include the following steps C-D.
Step C: If the analysis information satisfies a preset condition, modify the first gaze trajectory, where the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold.
It should be noted that the embodiments of the present application do not specifically limit the preset number threshold; it can be determined according to the actual situation.
It should be noted that, for the specific implementation of modifying the first gaze trajectory here, reference may be made to the description of "modifying the first gaze trajectory" under "in response to a gaze trajectory modification instruction triggered by the user, modifying the first gaze trajectory to obtain a modified first gaze trajectory" above, which is not repeated here.
Step D: Determine the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
It should be noted that determining the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory is implemented similarly to determining the outline of the target object according to the first gaze trajectory in step S102, and is not repeated here.
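Steps C-D amount to a refine-until-sufficient loop around the analysis, which the S301-S311 scenario below also follows. A schematic sketch is given here, in which `analyze` and `modify` stand in for the analysis and the user-driven trajectory modification described above; approximating the data amount with `len()` and capping the rounds are added assumptions.

```python
def refine_until_sufficient(image, trajectory, analyze, modify,
                            threshold, max_rounds=3):
    """Steps C-D as a loop: while the analysis information's data amount is
    at or below the preset number threshold, have the user modify the gaze
    trajectory, re-determine the outline, and re-analyze.

    analyze(image, trajectory) -> analysis_info and
    modify(trajectory) -> trajectory are placeholders for the routines
    described in the text above.
    """
    info = analyze(image, trajectory)
    rounds = 0
    while len(info) <= threshold and rounds < max_rounds:
        trajectory = modify(trajectory)   # fuse / add / delete gaze tracks
        info = analyze(image, trajectory)
        rounds += 1
    return trajectory, info
```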
The above embodiments introduced an image processing method; the method is described below with reference to a specific scenario.
In this scenario, the user is experiencing virtual reality technology and wishes to learn information related to a target object in the image displayed on the display screen of the virtual reality system.
FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 3, the method may be implemented through the following steps S301-S311.
S301: Display prompt information for prompting the user to control the user's gaze point to move along the outline of a target object in an image to be processed.
It should be noted that the image to be processed mentioned here refers to the image displayed on the display screen of the virtual reality system.
S302: Acquire the image to be processed and the user's first gaze trajectory, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of the target object.
S303: In response to a gaze trajectory modification instruction triggered by the user, modify the first gaze trajectory to obtain a modified first gaze trajectory.
S304: Determine a first outline of the target object according to the modified first gaze trajectory.
S305: Determine a first target image from the image to be processed according to the first outline of the target object.
S306: Analyze the first target image to obtain first analysis information of the first target image.
It should be noted that between steps S306 and S307, after the first analysis information of the first target image is obtained, the first analysis information may also be displayed.
S307: Determine that the data amount of the first analysis information is less than the preset number threshold, and modify the first gaze trajectory.
S308: Determine a second outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
S309: Determine a second target image from the image to be processed according to the second outline of the target object.
S310: Analyze the second target image to obtain second analysis information of the second target image.
S311: Determine that the data amount of the second analysis information is greater than the preset number threshold, and display the second analysis information.
It should be noted that after step S311, if the user considers the second analysis information inaccurate, the user may also actively trigger modification of the first gaze trajectory.
Exemplary Device
FIG. 4 is a schematic structural diagram of an image processing device according to an embodiment of the present application, as shown in FIG. 4.
The device 400 may specifically include: an acquisition unit 410 and a first determining unit 420.
The acquisition unit 410 is configured to acquire an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed;
the first determining unit 420 is configured to determine the outline of the target object according to the first gaze trajectory.
Optionally, the device 400 further includes:
a first modification unit, configured to modify, in response to a gaze trajectory modification instruction triggered by the user, the first gaze trajectory to obtain a modified first gaze trajectory;
correspondingly, the first determining unit 420 is configured to:
determine the outline of the target object in the image to be processed according to the modified first gaze trajectory.
Optionally, the gaze trajectory modification instruction is generated as follows:
if the user's gaze point is located in a preset area of the image to be processed, the gaze trajectory modification instruction is generated; the preset area corresponds to the function of modifying the first gaze trajectory.
Optionally, the device 400 further includes:
a display unit, configured to display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
Optionally, the device 400 further includes:
a second determining unit, configured to determine a target image from the image to be processed according to the outline of the target object;
an analysis unit, configured to analyze the target image to obtain analysis information of the target image.
Optionally, the analysis unit is configured to:
search the target image to obtain information related to the target object in the target image;
and/or,
perform image recognition on the target image to obtain the category of the target object in the target image.
Optionally, the device 400 further includes:
a second modification unit, configured to modify the first gaze trajectory if the analysis information satisfies a preset condition, wherein the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
a third determining unit, configured to determine the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
Optionally, modifying the first gaze trajectory includes:
acquiring a second gaze trajectory of the user, and fusing the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory; and/or,
acquiring a fourth gaze trajectory of the user, and adding the fourth gaze trajectory; and/or,
deleting all or part of the first gaze trajectory.
Optionally, the second gaze trajectory does not coincide with the first gaze trajectory at all, and fusing the first gaze trajectory and the second gaze trajectory to obtain the third gaze trajectory includes:
in response to a gaze trajectory fusion instruction triggered by the user, generating the third gaze trajectory from the first gaze trajectory and the second gaze trajectory.
Optionally, the fourth gaze trajectory does not completely coincide, or does not coincide at all, with the first gaze trajectory.
Optionally, deleting all or part of the first gaze trajectory includes:
acquiring a fifth gaze trajectory of the user, and determining the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory;
deleting the overlapping gaze trajectory from the first gaze trajectory.
For the specific implementation of each unit of the device 400, reference may be made to the description in the method embodiments above, which is not repeated here.
It can thus be seen that in the embodiments of the present application, the outline of the target object is determined with the user's indirect participation: when facing the image to be processed, the user can intuitively identify the target object in it and control the gaze point to move along the outline of the target object in the image to be processed. Therefore, the outline of the target object can be accurately determined according to the first gaze trajectory.
Those skilled in the art will readily conceive of other embodiments of the present application after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application indicated by the following claims.
It should be understood that the present application is not limited to the precise structures described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.
The above are only preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (24)

  1. An image processing method, comprising:
    acquiring an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed;
    determining the outline of the target object according to the first gaze trajectory.
  2. The method according to claim 1, wherein the method further comprises:
    in response to a gaze trajectory modification instruction triggered by the user, modifying the first gaze trajectory to obtain a modified first gaze trajectory;
    correspondingly, determining the outline of the target object in the image to be processed according to the first gaze trajectory comprises:
    determining the outline of the target object in the image to be processed according to the modified first gaze trajectory.
  3. The method according to claim 2, wherein the gaze trajectory modification instruction is generated as follows:
    if the user's gaze point is located in a preset area of the image to be processed, generating the gaze trajectory modification instruction; the preset area corresponds to the function of modifying the first gaze trajectory.
  4. The method according to any one of claims 1-3, wherein the method further comprises:
    displaying prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
  5. The method according to claim 1, wherein the method further comprises:
    determining a target image from the image to be processed according to the outline of the target object; analyzing the target image to obtain analysis information of the target image.
  6. The method according to claim 5, wherein analyzing the target image to obtain the analysis information of the target image comprises:
    searching the target image to obtain information related to the target object in the target image;
    and/or,
    performing image recognition on the target image to obtain the category of the target object in the target image.
  7. The method according to claim 5, wherein the method further comprises:
    if the analysis information satisfies a preset condition, modifying the first gaze trajectory, wherein the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
    determining the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
  8. The method according to claim 2 or 7, wherein modifying the first gaze trajectory comprises:
    acquiring a second gaze trajectory of the user, and fusing the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory; and/or,
    acquiring a fourth gaze trajectory of the user, and adding the fourth gaze trajectory; and/or,
    deleting all or part of the first gaze trajectory.
  9. The method according to claim 8, wherein the second gaze trajectory does not coincide with the first gaze trajectory at all, and fusing the first gaze trajectory and the second gaze trajectory to obtain the third gaze trajectory comprises:
    in response to a gaze trajectory fusion instruction triggered by the user, generating the third gaze trajectory from the first gaze trajectory and the second gaze trajectory.
  10. The method according to claim 8, wherein the fourth gaze trajectory does not completely coincide, or does not coincide at all, with the first gaze trajectory.
  11. The method according to claim 8, wherein deleting all or part of the first gaze trajectory comprises:
    acquiring a fifth gaze trajectory of the user, and determining the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory;
    deleting the overlapping gaze trajectory from the first gaze trajectory.
  12. An image processing device, comprising:
    an acquisition unit, configured to acquire an image to be processed and a first gaze trajectory of a user, wherein the first gaze trajectory is a gaze trajectory obtained as the user's gaze point moves along the outline of a target object in the image to be processed;
    a first determining unit, configured to determine the outline of the target object according to the first gaze trajectory.
  13. The device according to claim 12, wherein the device further comprises:
    a first modification unit, configured to modify, in response to a gaze trajectory modification instruction triggered by the user, the first gaze trajectory to obtain a modified first gaze trajectory;
    correspondingly, the first determining unit is configured to:
    determine the outline of the target object in the image to be processed according to the modified first gaze trajectory.
  14. The device according to claim 13, wherein the gaze trajectory modification instruction is generated as follows:
    if the user's gaze point is located in a preset area of the image to be processed, the gaze trajectory modification instruction is generated; the preset area corresponds to the function of modifying the first gaze trajectory.
  15. The device according to any one of claims 12-14, wherein the device further comprises:
    a display unit, configured to display prompt information for prompting the user to control the user's gaze point to move along the outline of the target object.
  16. The device according to claim 12, wherein the device further comprises:
    a second determining unit, configured to determine a target image from the image to be processed according to the outline of the target object;
    an analysis unit, configured to analyze the target image to obtain analysis information of the target image.
  17. The device according to claim 16, wherein the analysis unit is configured to:
    search the target image to obtain information related to the target object in the target image;
    and/or,
    perform image recognition on the target image to obtain the category of the target object in the target image.
  18. The device according to claim 16, wherein the device further comprises:
    a second modification unit, configured to modify the first gaze trajectory if the analysis information satisfies a preset condition, wherein the analysis information satisfying the preset condition indicates that the data amount of the analysis information is less than or equal to a preset number threshold;
    a third determining unit, configured to determine the outline of the target object according to the gaze trajectory obtained by modifying the first gaze trajectory.
  19. The device according to claim 13 or 18, wherein modifying the first gaze trajectory comprises:
    acquiring a second gaze trajectory of the user, and fusing the first gaze trajectory and the second gaze trajectory to obtain a third gaze trajectory; and/or,
    acquiring a fourth gaze trajectory of the user, and adding the fourth gaze trajectory; and/or,
    deleting all or part of the first gaze trajectory.
  20. The device according to claim 19, wherein the second gaze trajectory does not coincide with the first gaze trajectory at all, and fusing the first gaze trajectory and the second gaze trajectory to obtain the third gaze trajectory comprises:
    in response to a gaze trajectory fusion instruction triggered by the user, generating the third gaze trajectory from the first gaze trajectory and the second gaze trajectory.
  21. The device according to claim 19, wherein the fourth gaze trajectory does not completely coincide, or does not coincide at all, with the first gaze trajectory.
  22. The device according to claim 19, wherein deleting all or part of the first gaze trajectory comprises:
    acquiring a fifth gaze trajectory of the user, and determining the overlapping gaze trajectory of the fifth gaze trajectory and the first gaze trajectory;
    deleting the overlapping gaze trajectory from the first gaze trajectory.
  23. A storage medium, comprising a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the image processing method according to any one of claims 1 to 11.
  24. An image processing device, comprising a memory and a processor,
    wherein the memory stores a computer program;
    and the processor is configured to execute the computer program stored in the memory, the computer program, when running, executing the image processing method according to any one of claims 1 to 11.
PCT/CN2019/077303 2018-07-16 2019-03-07 Image processing method and device WO2020015368A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810778492.5 2018-07-16
CN201810778492.5A CN108874148A (zh) 2018-07-16 Image processing method and device

Publications (1)

Publication Number Publication Date
WO2020015368A1 (zh)

Family

ID=64302439

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077303 WO2020015368A1 (zh) 2018-07-16 2019-03-07 Image processing method and device

Country Status (2)

Country Link
CN (1) CN108874148A (zh)
WO (1) WO2020015368A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108874148A (zh) * 2018-07-16 2018-11-23 北京七鑫易维信息技术有限公司 Image processing method and device
CN109491508B (zh) * 2018-11-27 2022-08-26 北京七鑫易维信息技术有限公司 Method and device for determining a gaze object
CN112860059A (zh) * 2021-01-08 2021-05-28 广州朗国电子科技有限公司 Image recognition method and device based on eye tracking, and storage medium
CN113359996A (zh) * 2021-08-09 2021-09-07 季华实验室 Control system, method and device for a life-assistance robot, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984942A (zh) * 2014-05-28 2014-08-13 深圳市中兴移动通信有限公司 Object recognition method and mobile terminal
CN105913487A (zh) * 2016-04-09 2016-08-31 北京航空航天大学 Gaze direction calculation method based on analysis and matching of iris contours in human-eye images
CN106033609A (zh) * 2015-07-24 2016-10-19 广西科技大学 Target contour detection method imitating the biological saccadic eye-movement information processing mechanism
US20160366332A1 (en) * 2014-06-05 2016-12-15 Huizhou Tcl Mobile Communication Co. , Ltd Processing method and system for automatically photographing based on eyeball tracking technology
CN106340057A (zh) * 2016-08-31 2017-01-18 北京像素软件科技股份有限公司 Edge detection method for three-dimensional objects
CN107687818A (zh) * 2016-08-04 2018-02-13 纬创资通股份有限公司 Three-dimensional measurement method and three-dimensional measurement device
CN108874148A (zh) * 2018-07-16 2018-11-23 北京七鑫易维信息技术有限公司 Image processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110161875A1 (en) * 2009-12-29 2011-06-30 Nokia Corporation Method and apparatus for decluttering a mapping display
CN102156539B (zh) * 2011-03-28 2012-05-02 浙江大学 Target object recognition method based on eye-movement scanning
CN106681484B (zh) * 2015-11-06 2019-06-25 北京师范大学 Image target segmentation system combining eye tracking
CN105677018A (zh) * 2015-12-28 2016-06-15 宇龙计算机通信科技(深圳)有限公司 Screenshot method and device, and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984942A (zh) * 2014-05-28 2014-08-13 深圳市中兴移动通信有限公司 Object recognition method and mobile terminal
US20160366332A1 (en) * 2014-06-05 2016-12-15 Huizhou Tcl Mobile Communication Co. , Ltd Processing method and system for automatically photographing based on eyeball tracking technology
CN106033609A (zh) * 2015-07-24 2016-10-19 广西科技大学 Target contour detection method imitating the biological saccadic eye-movement information processing mechanism
CN105913487A (zh) * 2016-04-09 2016-08-31 北京航空航天大学 Gaze direction calculation method based on analysis and matching of iris contours in human-eye images
CN107687818A (zh) * 2016-08-04 2018-02-13 纬创资通股份有限公司 Three-dimensional measurement method and three-dimensional measurement device
CN106340057A (zh) * 2016-08-31 2017-01-18 北京像素软件科技股份有限公司 Edge detection method for three-dimensional objects
CN108874148A (zh) * 2018-07-16 2018-11-23 北京七鑫易维信息技术有限公司 Image processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN, FUCHUAN ET AL.: "Top-Down Information Processing in Cognition Eye Movements", ACTA BIOPHYSICA SINICA, vol. 10, no. 3, 30 September 1994 (1994-09-30), pages 431 *

Also Published As

Publication number Publication date
CN108874148A (zh) 2018-11-23

Similar Documents

Publication Publication Date Title
WO2020015368A1 (zh) Image processing method and device
US10192583B2 (en) Video editing using contextual data and content discovery using clusters
WO2019109643A1 (zh) Video recommendation method and apparatus, computer device, and storage medium
US8164644B2 (en) Method and apparatus for generating media signal by using state information
US20200249751A1 (en) User input processing with eye tracking
US8191004B2 (en) User feedback correlated to specific user interface or application features
KR101811909B1 (ko) Apparatus and method for gesture recognition
KR102092931B1 (ko) Gaze tracking method and user terminal for performing the same
WO2017092257A1 (zh) Co-viewing simulation method and apparatus in live broadcast
US9799099B2 (en) Systems and methods for automatic image editing
CN113678206B (zh) Rehabilitation training system for higher brain dysfunction and image processing device
JP4858375B2 (ja) Information processing device and program
US20200053336A1 (en) Information processing apparatus, information processing method, and storage medium
KR20180074180A (ko) Apparatus and method for providing information about virtual reality images
WO2018135334A1 (ja) Information processing device, information processing method, and computer program
CN112969097A (zh) Content playback method and device, and content commenting method and device
WO2022188386A9 (zh) Video publishing method, apparatus and device
US11514924B2 (en) Dynamic creation and insertion of content
JP7409134B2 (ja) Image processing method, image processing program, and image processing device
US20210089783A1 (en) Method for fast visual data annotation
WO2022179415A1 (zh) Method, apparatus, device and medium for displaying audiovisual works
WO2022110844A1 (zh) Method for automatically loading subtitles, and electronic device
JP2019140571A (ja) Content playback control device, content playback system, and program
US10244196B2 (en) Display control apparatus and display control method
JP2020025221A (ja) Communication support device, communication support system, and communication method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19838541

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19838541

Country of ref document: EP

Kind code of ref document: A1