WO2020077912A1 - Image processing method, device and hardware device - Google Patents

Image processing method, device and hardware device

Info

Publication number
WO2020077912A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour feature
feature point
inner contour
target object
point
Prior art date
Application number
PCT/CN2019/073082
Other languages
English (en)
Chinese (zh)
Inventor
范旭
李琰
杨辉
沈言浩
Original Assignee
北京微播视界科技有限公司 (Beijing Microlive Vision Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 (Beijing Microlive Vision Technology Co., Ltd.)
Publication of WO2020077912A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, device, and hardware device.
  • An application (APP) can realize functions such as dark-light detection, beauty camera, and super pixels.
  • The beautification function of a smart terminal usually includes effects such as skin-tone adjustment, skin smoothing (dermabrasion), eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in the image.
  • There are also APPs that can achieve simple special effects. However, current special-effects functions can only composite preset effects into the video or image; if a special effect needs to be modified, it must be re-created and composited again, which makes special effects very inflexible.
  • An image processing method includes: segmenting a target image to obtain a contour of a target object; generating an inner contour feature point of the target object according to the contour of the target object; generating an outer contour feature point according to the inner contour feature point; and filling a preset material into the area between the inner contour feature point and the outer contour feature point.
  • Further, the generating of the inner contour feature point of the target object according to the contour of the target object includes: generating the inner contour feature point along the contour line of the target object.
  • Further, the generating of an outer contour feature point according to the inner contour feature point includes: generating the outer contour feature point in a direction of the inner contour feature point away from the target object.
  • Further, the generating of the outer contour feature point in the direction of the inner contour feature point away from the target object includes: making a perpendicular line, through the first inner contour feature point, to the line segment connecting the first inner contour feature point and the second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; taking a first point on the perpendicular in the direction away from the target object, the length of the line segment between the first point and the first inner contour feature point being a predetermined length; and using the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • Further, the generating of the outer contour feature point in the direction away from the target object further includes: generating, via the third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculating the intersection of the line on which the first auxiliary outer contour feature point and the second outer contour feature point lie with the line on which the second auxiliary outer contour feature point and the fourth outer contour feature point lie, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point; and using the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • Further, the filling of the preset material into the area between the inner contour feature point and the outer contour feature point includes: filling the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; and repeating this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • Further, the segmenting of the target image to obtain the outline of the target object includes: acquiring a video; segmenting the video frame images in the video; and separating the target object in each video frame image from other objects to obtain the outline of the target object.
  • the method further includes: setting a correspondence between the preset material and the target image.
  • An image processing device including:
  • the contour acquisition module is used to segment the target image to obtain the contour of the target object
  • An inner contour feature point generating module configured to generate an inner contour feature point of the target object according to the contour of the target object
  • An outer contour feature point generating module configured to generate an outer contour feature point according to the inner contour feature point
  • the filling module is used to fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • the inner contour feature point generating module is configured to generate the inner contour feature point along the contour line of the target object.
  • the outer contour feature point generating module includes an outer contour feature point generating sub-module for generating an outer contour feature point in a direction of the inner contour feature point away from the target object according to the inner contour feature point.
  • Further, the outer contour feature point generating sub-module is used to: make a perpendicular line, through the first inner contour feature point, to the line segment connecting the first inner contour feature point and the second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; take a first point on the perpendicular in the direction away from the target object, the length of the line segment between the first point and the first inner contour feature point being a predetermined length; and use the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • Further, the outer contour feature point generating sub-module is also used to: generate, via the third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the line on which the first auxiliary outer contour feature point and the second outer contour feature point lie with the line on which the second auxiliary outer contour feature point and the fourth outer contour feature point lie, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point; and use the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • Further, the filling module is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • the contour acquisition module is used to acquire video; segment the video frame image in the video; separate the target object in the video frame image from other objects to obtain the contour of the target object.
  • the image processing device further includes a correspondence relationship setting module, configured to set a correspondence relationship between the preset material and the target image.
  • An electronic device includes: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed, the processor performs the steps of any of the above image processing methods.
  • A computer-readable storage medium is used to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the above methods.
  • the present disclosure discloses an image processing method, device, and hardware device.
  • The image processing method includes: segmenting the target image to obtain the contour of the target object; generating an inner contour feature point of the target object according to the contour of the target object; generating an outer contour feature point according to the inner contour feature point; and filling the preset material into the area between the inner contour feature point and the outer contour feature point.
  • With this method, the target object to be processed can be segmented from the image, and material can be added to the relevant area of the target object to form a special effect. When modifying the special effect, only the material needs to be modified, without re-editing the image, which improves the efficiency and flexibility of special-effects production.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a method for generating an outline feature point according to the present disclosure
  • FIG. 3 is a schematic diagram of a material filling method according to the present disclosure.
  • FIG. 4 is a schematic diagram of the effect after an image is processed by an image processing method according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides an image processing method.
  • the image processing method provided in this embodiment may be executed by a computing device, which may be implemented as software or a combination of software and hardware.
  • the computing device may be integrated in a server, a terminal device, or the like.
  • The image processing method mainly includes the following steps S101 to S104:
  • Step S101: Segment the target image to obtain the outline of the target object.
  • the target image may be any image.
  • In this embodiment, the target image is a picture that includes a target object, and the target object may be any object. In one embodiment, the target object is a human body.
  • The target image is segmented, and the target object in the image is separated from other objects to obtain the outline of the target object.
  • When the target image is a video, the video needs to be acquired first; the video frame images in the video are segmented; and the target object in each video frame image is separated from other objects to obtain the target object's contour.
  • Image segmentation is generally divided into interactive image segmentation and automatic image segmentation.
  • Traditional image processing generally uses interactive image segmentation, which requires human participation in image segmentation.
  • In the present disclosure, automatic image segmentation is used; the following takes human body image segmentation as an example to describe automatic image segmentation.
  • Automatic human body image segmentation methods can be divided into the following types. (1) Model-based methods: the human face is first detected based on prior knowledge of faces; a torso model is then used to find the torso below the face; the position of the lower body is estimated from the segmented torso; and finally the estimated torso and upper-leg regions provide seed points for image segmentation, completing the segmentation of the human body image. (2) Hierarchical-tree-based methods: adjacent body parts are modeled first and then the entire human pose; different body poses are modeled as the sum of nodes on different paths in a hierarchical detection tree, in which different layers correspond to different models of adjacent human parts and different paths correspond to different human postures. During detection, the tree is traversed downward from the root node, and different postures of the human body are segmented along different paths. (3) Methods based on independent component analysis with a reference signal: the human face is first detected according to prior knowledge of faces; the torso model is used to find the torso below the face; a reference signal is then obtained from the detected torso; and independent component analysis with the reference signal highlights the torso in the image to complete the torso segmentation. Other body parts are segmented similarly, finally completing the segmentation of the entire human body image. (4) Human body image segmentation methods based on
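  • As a concrete illustration of step S101, the following minimal Python sketch extracts the target contour from a binary segmentation mask. It assumes OpenCV and NumPy, and assumes that a mask has already been produced by one of the segmentation methods above; the function name and the largest-contour heuristic are illustrative choices, not part of the disclosure.

```python
import cv2
import numpy as np

def extract_target_contour(mask: np.ndarray) -> np.ndarray:
    """Return the largest external contour of a binary uint8 mask as an (N, 2) float array."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no target object found in mask")
    # Assume the largest contour is the target object (e.g. the human body).
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2).astype(np.float32)
```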
  • Step S102: Generate an inner contour feature point of the target object according to the contour of the target object.
  • In this step, an inner contour feature point of the target object is generated according to the contour of the target object obtained in step S101. The inner contour feature point may be located directly on the contour line of the target object, or may be kept at a predetermined distance from the contour line; for example, the inner contour feature point may be kept 0.1 cm away from the contour line. In one embodiment, the distance between adjacent inner contour feature points is the same; that is, the inner contour feature points are evenly distributed along the contour of the target object.
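  • The even distribution of inner contour feature points can be realized by resampling the contour at a fixed arc-length spacing. Below is a minimal sketch, assuming the contour array from the previous sketch and NumPy; using the material length as the spacing follows the embodiment described for FIG. 3 below.

```python
import numpy as np

def inner_contour_points(contour: np.ndarray, spacing: float) -> np.ndarray:
    """Sample points along a closed contour at a fixed arc-length spacing."""
    closed = np.vstack([contour, contour[:1]])        # close the polygon
    seg = np.diff(closed, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.arange(0.0, arc[-1], spacing)        # evenly spaced arc lengths
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.stack([x, y], axis=1)
```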
  • Step S103: Generate an outer contour feature point according to the inner contour feature point.
  • an outer contour feature point is generated according to the inner contour feature point generated in step S102.
  • In one embodiment, the outer contour feature point is generated in a direction away from the target object according to the inner contour feature point.
  • the generating process may be an interpolation process. Taking the target object as a human body for example, the inner contour feature points are located on the contour line of the human body, and for each inner contour feature point, an outer contour feature point corresponding to it is generated outside the human body.
  • Specifically, a perpendicular line is made through the first inner contour feature point to the line segment connecting the first inner contour feature point and the second inner contour feature point, where the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points; a first point is taken on the perpendicular in the direction away from the target object, such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and the first point is used as the outer contour feature point corresponding to the first inner contour feature point. The above two steps may then be repeated until every inner contour feature point has generated a corresponding outer contour feature point.
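  • A minimal sketch of this construction for points on straight contour sections: each inner contour feature point is offset by the predetermined length H along the perpendicular to the segment joining it to its adjacent point. Deciding which side faces away from the target object by pointing away from the contour centroid is a simplifying assumption of this sketch, not something the disclosure mandates.

```python
import numpy as np

def outer_contour_points(inner: np.ndarray, width_h: float) -> np.ndarray:
    """Offset each inner contour feature point outward by width_h."""
    centroid = inner.mean(axis=0)
    outer = np.empty_like(inner)
    n = len(inner)
    for i in range(n):
        p, nxt = inner[i], inner[(i + 1) % n]     # adjacent inner feature points
        d = nxt - p
        d = d / (np.linalg.norm(d) + 1e-12)
        normal = np.array([-d[1], d[0]])          # perpendicular to the segment
        if np.dot(normal, p - centroid) < 0:      # flip to point away from the object
            normal = -normal
        outer[i] = p + width_h * normal           # segment p -> outer has length H
    return outer
```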
  • For an inflection point, two different outer contour feature points are generated via the third inner contour feature point, namely the first auxiliary outer contour feature point and the second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at the contour inflection point. The intersection of the line on which the first auxiliary outer contour feature point and the second outer contour feature point lie with the line on which the second auxiliary outer contour feature point and the fourth outer contour feature point lie is then calculated, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point. The intersection is used as the third outer contour feature point corresponding to the third inner contour feature point.
  • As shown in FIG. 2, the inner contour feature points include points 1, 2, 3, 4, 5, and 6, of which points 1, 2, 3, 5, and 6 lie on straight sections of the contour, while point 4 is a contour inflection point of the target object and corresponds to the third inner contour feature point in the above embodiment. Taking point 2 as an example, a perpendicular to line segment 12 is made through point 2, a point b is taken on the perpendicular in the direction away from the target object, and the length of segment 2b is made equal to H, where H is the preset length. Besides point 1, point 3 is also adjacent to point 2, so a perpendicular to line segment 23 is likewise made through point 2. In this embodiment, points 1, 2, and 3 lie on the same straight line, so the perpendicular to segment 12 through point 2 coincides with the perpendicular to segment 23; the two points b thus obtained coincide, and point b can be determined to be the outer contour feature point corresponding to point 2. Repeating the above operation for each point yields the outer contour feature points corresponding to inner contour feature points 1, 2, 3, 5, and 6. For inflection point 4, the two perpendiculars do not coincide, and its outer contour feature point is obtained as the intersection described above.
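  • For the inflection-point case, the outer contour feature point is the intersection of two lines, which can be computed with a standard 2D cross-product formula. A minimal sketch follows; the argument names mirror the numbering in the text (auxiliary points generated through inflection point 3, outer points of adjacent points 2 and 4) and are illustrative.

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1, p2 with the line through q1, q2."""
    p1, p2, q1, q2 = (np.asarray(v, dtype=float) for v in (p1, p2, q1, q2))
    r, s = p2 - p1, q2 - q1
    denom = r[0] * s[1] - r[1] * s[0]             # 2D cross product of directions
    if abs(denom) < 1e-12:
        raise ValueError("lines are parallel")
    t = ((q1[0] - p1[0]) * s[1] - (q1[1] - p1[1]) * s[0]) / denom
    return p1 + t * r

def inflection_outer_point(aux1, outer2, aux2, outer4):
    # Line through the first auxiliary point and the second outer point,
    # intersected with the line through the second auxiliary point and the
    # fourth outer point, as described above.
    return line_intersection(aux1, outer2, aux2, outer4)
```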
  • Step S104: Fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • In one embodiment, the preset material may be a color card with a fixed size; in this step, the material is filled into the area between the inner contour feature point and the outer contour feature point to form a stroke on the target object.
  • In one embodiment, the filling process is as follows: fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • FIG. 3 shows an example of material filling. The length of the material is L and its width is H; the distance between adjacent inner contour feature points is L, and the distance between an inner contour feature point and its corresponding outer contour feature point is H. In this way, the area 1ab2 is exactly the size of one piece of material, and the preset material just fills it, as shown by the shaded area 1ab2 in FIG. 3.
  • In one embodiment, the length and width of the preset material are obtained in advance. When the inner contour feature points are generated in step S102, they are sampled on the contour of the target object using the length of the preset material as the sampling distance, and the outer contour feature points are taken at a distance from the inner contour feature points equal to the width of the preset material.
  • In another embodiment, the distance between inner contour feature points is not L but n times or 1/n times L; in this case, the material may be stretched or compressed when it is filled in.
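  • A minimal sketch of the filling operation of step S104: one material tile is warped into the quadrilateral formed by two adjacent inner contour feature points and their corresponding outer contour feature points (the area 1ab2 in FIG. 3) and composited over the frame. Using a perspective warp is one plausible realization and an assumption of this sketch; the disclosure only requires that the material fill (possibly stretched) the area.

```python
import cv2
import numpy as np

def fill_quad(frame: np.ndarray, material: np.ndarray, p1, p2, b1, b2) -> None:
    """Warp `material` into the quad p1 -> p2 -> b2 -> b1 and paste it onto `frame`."""
    h, w = material.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # material corners
    dst = np.float32([p1, p2, b2, b1])                   # inner edge, then outer edge
    m = cv2.getPerspectiveTransform(src, dst)
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(material, m, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), m, size)
    frame[mask > 0] = warped[mask > 0]                   # paste the tile in place

# Calling fill_quad for every pair of adjacent feature points strokes the whole
# contour, which is the repeated filling operation described above.
```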
  • In one embodiment, before segmenting the target image to obtain the outline of the target object, the method further includes: setting a correspondence between the preset material and the target image.
  • Multiple materials may be prepared in advance, corresponding to multiple target images. The multiple target images may be pictures or video frames. In one embodiment, the target image is a video frame, and a corresponding material is set for each of multiple video frames to generate a different stroke effect for each frame, so that the stroke effect changes as the video plays.
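  • A minimal sketch of this correspondence, assuming the prepared materials are simply cycled frame by frame; the disclosure only requires that some correspondence between materials and frames be set, so the cycling policy here is an illustrative assumption.

```python
from typing import Sequence
import numpy as np

def material_for_frame(materials: Sequence[np.ndarray], frame_index: int) -> np.ndarray:
    """Return the preset material set for a given video frame."""
    return materials[frame_index % len(materials)]
```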
  • In one embodiment, before segmenting the target image to obtain the outline of the target object, the method further includes: selecting the target object.
  • the target object may be any object that can be segmented from the target image.
  • The target object may be a human body, various animals such as cats and dogs, plants, buildings, and the like.
  • For different target objects, different object segmentation algorithms are called, so users can flexibly choose the objects that need to be segmented.
  • In one embodiment, before segmenting the target image to obtain the outline of the target object, the method further includes: selecting the serial number of the target object to be segmented. The serial number of the target object that needs to be processed can be set in advance: if the serial number is set to 1, the image processing of the present disclosure is performed on the first segmented human body; if it is set to 0, the image processing of the present disclosure is performed on all segmented human bodies.
  • In one embodiment, the display properties of the outline can be set; for example, a certain section of the outline can be set not to be displayed in one or more frames, or sections of the outline can be displayed randomly, so that the material appears to flicker.
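  • A minimal sketch of such a display property, assuming each contour section is independently shown or hidden at random per frame to produce the flickering effect; the visibility probability is an illustrative assumption.

```python
import numpy as np

def visible_sections(num_sections: int, rng: np.random.Generator,
                     p_visible: float = 0.8) -> np.ndarray:
    """Boolean mask of which contour sections are drawn in the current frame."""
    return rng.random(num_sections) < p_visible
```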
  • In a typical application, the target object is a human body, and the human body is stroked to highlight the position of the human body in the image.
  • the present disclosure discloses an image processing method, device, and hardware device.
  • The image processing method includes: segmenting the target image to obtain the contour of the target object; generating an inner contour feature point of the target object according to the contour of the target object; generating an outer contour feature point according to the inner contour feature point; and filling the preset material into the area between the inner contour feature point and the outer contour feature point.
  • With this method, the target object to be processed can be segmented from the image, and material can be added to the relevant area of the target object to form a special effect. When modifying the special effect, only the material needs to be modified, without re-editing the image, which improves the efficiency and flexibility of special-effects production.
  • the following is a device embodiment of the present disclosure.
  • the device embodiment of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • Only the parts related to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, please refer to the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing device.
  • the device may perform the steps described in the above embodiments of the image processing method.
  • As shown in FIG. 5, the device 500 mainly includes a contour acquisition module 501, an inner contour feature point generation module 502, an outer contour feature point generation module 503, and a filling module 504, where:
  • the contour acquisition module 501 is used to segment the target image to obtain the contour of the target object
  • An inner contour feature point generating module 502, configured to generate an inner contour feature point of the target object according to the contour of the target object;
  • An outer contour feature point generating module 503, configured to generate an outer contour feature point according to the inner contour feature point;
  • The filling module 504 is used to fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • the inner contour feature point generating module 502 is configured to generate the inner contour feature point along the contour line of the target object, wherein the distance between adjacent inner contour feature points is the same.
  • the distance between the adjacent inner contour feature points is the length of the material.
  • the outer contour feature point generation module 503 includes an outer contour feature point generation sub-module for generating an outer contour feature point in a direction of the inner contour feature point away from the target object according to the inner contour feature point.
  • Further, the outer contour feature point generating sub-module makes a perpendicular line through the first inner contour feature point to the line segment connecting the first inner contour feature point and the second inner contour feature point, where the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points; takes a first point on the perpendicular in the direction away from the target object, the length of the line segment between the first point and the first inner contour feature point being a predetermined length; and uses the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • Further, the outer contour feature point generating sub-module is also used to: generate, via the third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the line on which the first auxiliary outer contour feature point and the second outer contour feature point lie with the line on which the second auxiliary outer contour feature point and the fourth outer contour feature point lie, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point; and use the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • Further, the filling module 504 is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • the contour acquisition module 501 is used to acquire a video; segment the video frame image in the video; and separate the target object in the video frame image from other objects to obtain the contour of the target object.
  • the image processing apparatus 500 further includes a correspondence relationship setting module, which is used to set a correspondence relationship between the preset material and the target image.
  • The device shown in FIG. 5 can execute the method of the embodiment shown in FIG. 1.
  • FIG. 6 shows a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure.
  • Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • The electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other via a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or from the storage device 608, or from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wire, optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: segment the target image to obtain the outline of the target object; according to the target object The contour generates an inner contour feature point of the target object; generates an outer contour feature point according to the inner contour feature point; fills a preset material into the area between the inner contour feature point and the outer contour feature point .
  • the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • The above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet connection provided by an Internet service provider such as AT&T, MCI, Sprint, EarthLink, MSN, or GTE).
  • Each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logic functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or can be implemented with a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.

Abstract

The invention relates to an image processing method, device, and hardware device. The image processing method comprises: segmenting a target image to obtain a contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a region between the inner contour feature points and the outer contour feature points with a preset material. With the image processing method of the embodiments of the present invention, a target object to be processed can be segmented from an image and material can be added to a relevant region of the target object to form a special effect; when modifying the special effect, only the material needs to be modified and the image does not need to be re-edited, which improves the efficiency and flexibility of producing special effects.
PCT/CN2019/073082 2018-10-19 2019-01-25 Image processing method, device and hardware device WO2020077912A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811222643.5A CN110070554A (zh) 2018-10-19 2018-10-19 图像处理方法、装置、硬件装置
CN201811222643.5 2018-10-19

Publications (1)

Publication Number Publication Date
WO2020077912A1 (fr)

Family

ID=67365892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073082 WO2020077912A1 (fr) 2018-10-19 2019-01-25 Procédé, dispositif et dispositif matériel de traitement d'image

Country Status (2)

Country Link
CN (1) CN110070554A (fr)
WO (1) WO2020077912A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308769B (zh) * 2020-10-30 2022-06-10 北京字跳网络技术有限公司 Image synthesis method, device, and storage medium
CN112581620A (zh) * 2020-11-30 2021-03-30 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114125320B (zh) * 2021-08-31 2023-05-09 北京达佳互联信息技术有限公司 Method and apparatus for generating image special effects

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751852A (en) * 1996-04-29 1998-05-12 Xerox Corporation Image structure map data structure for spatially indexing an image
US6300955B1 (en) * 1997-09-03 2001-10-09 Mgi Software Corporation Method and system for mask generation
CN101950427A (zh) * 2010-09-08 2011-01-19 东莞电子科技大学电子信息工程研究院 一种适用于移动终端的矢量线段轮廓化方法
CN104520901A (zh) * 2012-08-09 2015-04-15 高通股份有限公司 具有虚线模式的路径的gpu加速再现
CN105513006A (zh) * 2014-10-16 2016-04-20 北京汉仪科印信息技术有限公司 一种TrueType字体轮廓粗细调整方法及装置
CN108399654A (zh) * 2018-02-06 2018-08-14 北京市商汤科技开发有限公司 描边特效程序文件包的生成及描边特效生成方法与装置


Also Published As

Publication number Publication date
CN110070554A (zh) 2019-07-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19872790; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.08.2021))
122 Ep: pct application non-entry in european phase (Ref document number: 19872790; Country of ref document: EP; Kind code of ref document: A1)