CN113283307A - Method and system for identifying object in video and computer storage medium - Google Patents

Method and system for identifying object in video and computer storage medium

Info

Publication number
CN113283307A
CN113283307A (application CN202110481032.8A)
Authority
CN
China
Prior art keywords
target
quasi
frame
frames
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110481032.8A
Other languages
Chinese (zh)
Inventor
马哲
刘剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Thunderstone Technology Co ltd
Original Assignee
Beijing Thunderstone Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Thunderstone Technology Co., Ltd.
Priority: CN202110481032.8A
Publication: CN113283307A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses a method and a system for identifying an object in a video, and a computer storage medium. The method comprises: dividing the current frame of a target video into a plurality of regions; performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class; selecting all quasi-target center points in the current frame that correspond to the object classes to be retained, and performing box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes; and de-duplicating the plurality of quasi-target boxes to obtain the target object boxes. By inserting class screening between region screening and de-duplication, classes of no interest are screened out, the de-duplication workload is reduced, data processing is accelerated, and time is saved; furthermore, the set of recognized classes can be changed dynamically while the video plays.

Description

Method and system for identifying object in video and computer storage medium
Technical Field
The present invention relates to the technical field of object identification, and in particular to a method and a system for identifying an object in a video and a computer storage medium.
Background
Object recognition is one of the most common applications in computer vision: individual bounding boxes are drawn in an image to mark people, cars, animals, plants, and so on, with each box containing the recognized object as completely as possible.
In the prior art, objects are identified first and class screening is performed afterwards. This approach has a drawback: because a large amount of region data is produced during identification, redundant region data must be processed in the NMS computation, which makes the computation heavy and wastes time.
No effective solution has yet been proposed for the heavy computation and time consumption of object recognition in the prior art.
Disclosure of Invention
Embodiments of the present invention provide a method and a system for identifying an object in a video, and a computer storage medium, to solve the prior-art problems of heavy computation and time consumption in object identification.
To achieve the above object, in one aspect the present invention provides a method for identifying an object in a video, comprising: dividing the current frame of a target video into a plurality of regions; performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class; selecting all quasi-target center points in the current frame that correspond to the object classes to be retained, and performing box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes; and de-duplicating the plurality of quasi-target boxes to obtain the target object boxes.
Optionally, the object classes retained for the current frame are the same as or different from those retained for the previous frame.
Optionally, after dividing the current frame of the target video into a plurality of regions and performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, the method comprises: judging whether the probability values of all boxes in a region are smaller than a preset probability value; if so, deleting that region, and otherwise retaining it.
Optionally, de-duplicating the plurality of quasi-target boxes to obtain the target object boxes comprises: retaining one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
Optionally, retaining one corresponding quasi-target box for each target object comprises: obtaining the intersection-over-union (IoU) values of all quasi-target boxes corresponding to each target object through an NMS algorithm, and retaining one corresponding quasi-target box for each target object according to those IoU values.
In another aspect, the present invention provides a system for identifying an object in a video, comprising: a dividing unit, configured to divide the current frame of a target video into a plurality of regions and to perform identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class; a box-selection unit, configured to select all quasi-target center points in the current frame that correspond to the object classes to be retained, and to perform box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes; and a de-duplication unit, configured to de-duplicate the plurality of quasi-target boxes to obtain the target object boxes.
Optionally, the system further comprises a screening unit, configured to judge whether the probability values of all boxes in a region are smaller than a preset probability value, to delete that region if so, and to retain it otherwise.
Optionally, the de-duplication unit comprises a retention module, configured to retain one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
Optionally, the retention module comprises a calculation module, configured to obtain the intersection-over-union values of all quasi-target boxes corresponding to each target object through an NMS algorithm and to retain one corresponding quasi-target box for each target object according to those values.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for identifying an object in a video as described above.
The beneficial effects of the invention are as follows:
The invention provides a method for identifying objects in a video that inserts class screening between region screening and de-duplication: all quasi-target center points corresponding to the object classes to be retained are selected in the current frame, and boxes are selected again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes. Classes of no interest are thereby screened out, the NMS workload is reduced, data processing is accelerated, and time is saved. Furthermore, classes can be added or deleted while the video plays, so the set of recognized classes can change dynamically.
Drawings
Fig. 1 is a flowchart of a method for identifying an object in a video according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an object recognition system in a video according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for identifying an object in a video according to an embodiment of the present invention; as shown in fig. 1, the method comprises:
S101: dividing the current frame of a target video into a plurality of regions; performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region; each box corresponds to a probability value and a center point, and each center point corresponds to an object class.
For example, suppose an adult, a child, and a cat are to be identified in a video whose frames contain classes such as people, cars, cats, and trees. First, the current frame of the target video is divided by grid lines into 19 regions; identification and anchoring are performed on each region, and 5 anchored boxes of different sizes are obtained in each region. Each box corresponds to a probability value and a center point, each center point corresponds to an object class, and there are 80 classes in total.
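The grid-and-anchor step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the frame size, grid shape, and the `AnchorBox` layout are assumptions (the example's 19 regions do not form a uniform grid, so a 4 by 5 grid is used instead).

```python
from dataclasses import dataclass

@dataclass
class AnchorBox:
    cx: float    # x coordinate of the center point
    cy: float    # y coordinate of the center point
    prob: float  # probability value for this box
    cls: str     # object class associated with the center point

def split_into_regions(frame_w, frame_h, n_cols, n_rows):
    """Divide a video frame into a grid of (left, top, right, bottom) regions."""
    rw, rh = frame_w / n_cols, frame_h / n_rows
    return [(c * rw, r * rh, (c + 1) * rw, (r + 1) * rh)
            for r in range(n_rows) for c in range(n_cols)]

# A 1920x1080 frame split into a 4x5 grid of 20 regions.
regions = split_into_regions(1920, 1080, 4, 5)
print(len(regions))  # 20
```

Each region would then be passed to the detector, which emits several `AnchorBox` candidates per region.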
S103: selecting all quasi-target center points in the current frame that correspond to the object classes to be retained, and performing box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes.
All quasi-target center points corresponding to the object classes to be retained (person and cat) are selected in the current frame, and boxes are selected again according to each quasi-target center point and the preset length and width of its class (person or cat) to obtain a plurality of quasi-target boxes. The length and width of these quasi-target boxes may exactly match the height and width of the person or cat, or may exceed them. Note that the preset lengths and widths for an adult and a child are different.
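The class screening and re-boxing of step S103 can be sketched as follows. The preset per-class sizes and the tuple layouts are assumptions made for illustration:

```python
# Hypothetical preset (width, height) per object class, in pixels.
PRESET_SIZE = {"person": (60.0, 170.0), "cat": (50.0, 35.0)}

def reselect_boxes(center_points, keep_classes):
    """Keep only center points whose class is to be retained, then draw a box
    of the class's preset size around each one.
    Input tuples: (cx, cy, prob, cls); output: (left, top, right, bottom, prob, cls)."""
    boxes = []
    for cx, cy, prob, cls in center_points:
        if cls in keep_classes:                       # class screening
            w, h = PRESET_SIZE[cls]
            boxes.append((cx - w / 2, cy - h / 2,     # box selection around
                          cx + w / 2, cy + h / 2,     # the quasi-target center
                          prob, cls))
    return boxes

centers = [(100, 200, 0.9, "person"), (400, 300, 0.8, "cat"), (50, 50, 0.7, "tree")]
quasi = reselect_boxes(centers, {"person", "cat"})
print(len(quasi))  # 2: the tree center point is screened out before de-duplication
```

Because the "tree" center point is dropped here, it never reaches the NMS step, which is the computational saving the patent describes.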
S104: de-duplicating the plurality of quasi-target boxes to obtain the target object boxes.
The plurality of quasi-target boxes selected for the adult and the child are de-duplicated to obtain the target object boxes. In the invention, by adding class screening before the de-duplication step, classes of no interest are screened out, the NMS workload is reduced, data processing is accelerated, and time is saved.
In an alternative embodiment, the object classes retained for the current frame are the same as or different from those retained for the previous frame.
During video playback, object classes can be added or deleted dynamically, so the set of recognized classes can change: for example, besides adults, children, and cats, cars and bicycles may additionally be recognized; or only adults and children may be recognized, with cats removed.
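Dynamically changing the recognized classes amounts to updating the set of retained classes between frames. A sketch with assumed class names:

```python
# Classes retained for the current frame.
keep_classes = {"adult", "child", "cat"}

keep_classes |= {"car", "bicycle"}  # add classes during playback
keep_classes -= {"cat"}             # delete a class during playback

# The next frame's class screening (step S103) simply uses the updated set.
print(sorted(keep_classes))  # ['adult', 'bicycle', 'car', 'child']
```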
In an alternative embodiment, after S101 the method comprises: S102, judging whether the probability values of all boxes in a region are smaller than a preset probability value; if so, deleting that region, and otherwise retaining it.
In the invention, the preset probability value is set to 0.6. A region in which every box scores below 0.6 is deleted, which reduces the number of regions, lowers the computation in the subsequent class screening, and speeds up data processing.
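Step S102 can be sketched as follows, assuming each region is represented simply as a list of its boxes' probability values; the 0.6 threshold comes from the text:

```python
def filter_regions(regions, threshold=0.6):
    """Delete a region when every box probability in it is below the threshold;
    otherwise retain the region."""
    return [probs for probs in regions
            if not all(p < threshold for p in probs)]

regions = [[0.1, 0.3, 0.5],   # all boxes below 0.6 -> region deleted
           [0.2, 0.7, 0.4],   # one box reaches 0.7 -> region retained
           [0.9]]             # retained
print(len(filter_regions(regions)))  # 2
```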
In an alternative embodiment, de-duplicating the plurality of quasi-target boxes to obtain the target object boxes comprises: retaining one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
For example, a person may be recognized only at the belly, and the belly yields multiple quasi-target center points, so the adult has multiple quasi-target center points and multiple quasi-target boxes are selected accordingly; only one of them needs to be retained and the rest are deleted. Likewise, the child has multiple quasi-target center points and multiple quasi-target boxes, of which only one is retained. Similarly, the cat may be recognized only at the head, which yields multiple quasi-target center points, so multiple quasi-target boxes are selected for the cat; only one is retained and the rest are deleted.
In an alternative embodiment, retaining one corresponding quasi-target box for each target object comprises: obtaining the intersection-over-union (IoU) values of all quasi-target boxes corresponding to each target object through the NMS algorithm, and retaining one corresponding quasi-target box for each target object according to those IoU values.
For example, the cat has multiple quasi-target center points and correspondingly multiple quasi-target boxes; one quasi-target box is retained by the following procedure: 1. build a set H holding the candidate boxes to be processed, initialized to all quasi-target boxes, and a set M holding the optimal boxes, initialized to the empty set; 2. sort all boxes in set H, select the box m with the highest score, and move m from H to M; 3. traverse the boxes in H, compute each one's IoU with m, and if the IoU exceeds a preset threshold (set to 0.6 in the invention) consider the box to overlap m and remove it from H; 4. return to step 2 and iterate until H is empty. The boxes in M are the retained quasi-target boxes. In this way, a target object box that completely frames the cat is selected.
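The four-step procedure described above is standard non-maximum suppression. A minimal sketch, with the box layout assumed to be (left, top, right, bottom, score):

```python
def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom, ...) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, iou_threshold=0.6):
    """Steps 1-4 of the text: H holds candidates, M collects the best boxes."""
    H = sorted(boxes, key=lambda b: b[4])  # ascending score, so pop() is the max
    M = []
    while H:                               # step 4: iterate until H is empty
        m = H.pop()                        # step 2: highest-scoring box -> M
        M.append(m)
        H = [b for b in H                  # step 3: drop boxes overlapping m
             if iou(b, m) <= iou_threshold]
    return M

cat_boxes = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(len(nms(cat_boxes)))  # 2: the second box overlaps the first (IoU ~ 0.68)
```

The second box is removed because its IoU with the top-scoring box (81/119, about 0.68) exceeds the 0.6 threshold, while the distant third box survives as a separate object.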
In another aspect, the present invention provides a system for identifying an object in a video. Fig. 2 is a schematic structural diagram of the system according to an embodiment of the present invention; as shown in fig. 2, it comprises:
a dividing unit 201, configured to divide the current frame of a target video into a plurality of regions and to perform identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class.
For example, suppose an adult, a child, and a cat are to be identified in a video whose frames contain classes such as people, cars, cats, and trees. First, the current frame of the target video is divided by grid lines into 19 regions; identification and anchoring are performed on each region, and 5 anchored boxes of different sizes are obtained in each region. Each box corresponds to a probability value and a center point, each center point corresponds to an object class, and there are 80 classes in total.
a box-selection unit 203, configured to select all quasi-target center points in the current frame that correspond to the object classes to be retained, and to perform box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes.
All quasi-target center points corresponding to the object classes to be retained (person and cat) are selected in the current frame, and boxes are selected again according to each quasi-target center point and the preset length and width of its class (person or cat) to obtain a plurality of quasi-target boxes. The length and width of these quasi-target boxes may exactly match the height and width of the person or cat, or may exceed them. Note that the preset lengths and widths for an adult and a child are different.
During video playback, object classes can be added or deleted dynamically, so the set of recognized classes can change: for example, besides adults, children, and cats, cars and bicycles may additionally be recognized; or only adults and children may be recognized, with cats removed.
a de-duplication unit 204, configured to de-duplicate the plurality of quasi-target boxes to obtain the target object boxes.
The plurality of quasi-target boxes selected for the adult and the child are de-duplicated to obtain the target object boxes. In the invention, by adding class screening before the de-duplication step, classes of no interest are screened out, the NMS workload is reduced, data processing is accelerated, and time is saved.
In an alternative embodiment, the system further comprises: a screening unit 202, configured to judge whether the probability values of all boxes in a region are smaller than a preset probability value, to delete that region if so, and to retain it otherwise.
In the invention, the preset probability value is set to 0.6; a region in which every box scores below 0.6 is deleted, which reduces the number of regions, lowers the computation in the subsequent class screening, and speeds up data processing.
In an alternative embodiment, the de-duplication unit comprises: a retention module, configured to retain one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
For example, a person may be recognized only at the belly, and the belly yields multiple quasi-target center points, so the adult has multiple quasi-target center points and multiple quasi-target boxes are selected accordingly; only one of them needs to be retained and the rest are deleted. Likewise, the child has multiple quasi-target center points and multiple quasi-target boxes, of which only one is retained. Similarly, the cat may be recognized only at the head, which yields multiple quasi-target center points, so multiple quasi-target boxes are selected for the cat; only one is retained and the rest are deleted.
In an alternative embodiment, the retention module comprises: a calculation module, configured to obtain the intersection-over-union (IoU) values of all quasi-target boxes corresponding to each target object through the NMS algorithm and to retain one corresponding quasi-target box for each target object according to those values.
For example, the cat has multiple quasi-target center points and correspondingly multiple quasi-target boxes; one quasi-target box is retained by the following procedure: 1. build a set H holding the candidate boxes to be processed, initialized to all quasi-target boxes, and a set M holding the optimal boxes, initialized to the empty set; 2. sort all boxes in set H, select the box m with the highest score, and move m from H to M; 3. traverse the boxes in H, compute each one's IoU with m, and if the IoU exceeds a preset threshold (set to 0.6 in the invention) consider the box to overlap m and remove it from H; 4. return to step 2 and iterate until H is empty. The boxes in M are the retained quasi-target boxes. In this way, a target object box that completely frames the cat is selected.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for identifying an object in a video as described above.
The software is stored on a storage medium, which includes but is not limited to: optical disks, floppy disks, hard disks, erasable memory, and the like.
The beneficial effects of the invention are as follows:
The invention provides a method for identifying objects in a video that inserts class screening between region screening and de-duplication: all quasi-target center points corresponding to the object classes to be retained are selected in the current frame, and boxes are selected again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes. Classes of no interest are thereby screened out, the NMS workload is reduced, data processing is accelerated, and time is saved. Furthermore, classes can be added or deleted while the video plays, so the set of recognized classes can change dynamically.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for identifying an object in a video, characterized by comprising:
dividing the current frame of a target video into a plurality of regions; performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class;
selecting all quasi-target center points in the current frame that correspond to the object classes to be retained, and performing box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes;
and de-duplicating the plurality of quasi-target boxes to obtain the target object boxes.
2. The method of claim 1, wherein:
the object classes retained for the current frame are the same as or different from those retained for the previous frame.
3. The method of claim 1, wherein after dividing the current frame of the target video into a plurality of regions and performing identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, the method comprises:
judging whether the probability values of all boxes in a region are smaller than a preset probability value; if so, deleting that region, and otherwise retaining it.
4. The method of claim 1, wherein de-duplicating the plurality of quasi-target boxes to obtain the target object boxes comprises:
retaining one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
5. The method of claim 4, wherein retaining one corresponding quasi-target box for each target object comprises:
obtaining the intersection-over-union values of all quasi-target boxes corresponding to each target object through an NMS algorithm; and retaining one corresponding quasi-target box for each target object according to those values.
6. A system for identifying an object in a video, characterized by comprising:
a dividing unit, configured to divide the current frame of a target video into a plurality of regions and to perform identification and anchoring on each region to obtain a plurality of anchored bounding boxes in each region, where each box corresponds to a probability value and a center point, and each center point corresponds to an object class;
a box-selection unit, configured to select all quasi-target center points in the current frame that correspond to the object classes to be retained, and to perform box selection again according to each quasi-target center point and the preset length and width of its object class to obtain a plurality of quasi-target boxes;
and a de-duplication unit, configured to de-duplicate the plurality of quasi-target boxes to obtain the target object boxes.
7. The system of claim 6, further comprising:
a screening unit, configured to judge whether the probability values of all boxes in a region are smaller than a preset probability value, to delete that region if so, and to retain it otherwise.
8. The system of claim 6, wherein the de-duplication unit comprises:
a retention module, configured to retain one corresponding quasi-target box for each target object to obtain a plurality of different target object boxes.
9. The system of claim 8, wherein the retention module comprises:
a calculation module, configured to obtain the intersection-over-union values of all quasi-target boxes corresponding to each target object through an NMS algorithm and to retain one corresponding quasi-target box for each target object according to those values.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for object recognition in video according to any one of claims 1 to 5.
CN202110481032.8A, filed 2021-04-30: Method and system for identifying object in video and computer storage medium (Pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481032.8A CN113283307A (en) 2021-04-30 2021-04-30 Method and system for identifying object in video and computer storage medium


Publications (1)

Publication Number Publication Date
CN113283307A true CN113283307A (en) 2021-08-20

Family

ID=77277850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481032.8A Pending CN113283307A (en) 2021-04-30 2021-04-30 Method and system for identifying object in video and computer storage medium

Country Status (1)

Country Link
CN (1) CN113283307A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101825459B1 (en) * 2016-08-05 2018-03-22 재단법인대구경북과학기술원 Multi-class objects detection apparatus and method thereof
CN110598764A (en) * 2019-08-28 2019-12-20 杭州飞步科技有限公司 Training method and device of target detection model and electronic equipment
WO2020134528A1 (en) * 2018-12-29 2020-07-02 深圳云天励飞技术有限公司 Target detection method and related product
CN111612002A (en) * 2020-06-04 2020-09-01 广州市锲致智能技术有限公司 Multi-target object motion tracking method based on neural network
CN112055172A (en) * 2020-08-19 2020-12-08 浙江大华技术股份有限公司 Method and device for processing monitoring video and storage medium
CN112270252A (en) * 2020-10-26 2021-01-26 西安工程大学 Multi-vehicle target identification method for improving YOLOv2 model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cao Qixin et al., "Wheeled Autonomous Mobile Robots", Shanghai Jiao Tong University Press, 29 February 2012, p. 201 *

Similar Documents

Publication Publication Date Title
CN110427884B (en) Method, device, equipment and storage medium for identifying document chapter structure
CN110263628B (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN111967971B (en) Bank customer data processing method and device
CN113094183B (en) Training task creating method, device, system and medium of AI (Artificial Intelligence) training platform
CN106780579A (en) A kind of ultra-large image characteristic point matching method and system
CN112116806A (en) Traffic flow characteristic extraction method and system
CN110689440A (en) Vehicle insurance claim settlement identification method and device based on image identification, computer equipment and storage medium
CN113408561A (en) Model generation method, target detection method, device, equipment and storage medium
CN112085644A (en) Multi-column data sorting method and device, readable storage medium and electronic equipment
CN111209106B (en) Flow chart dividing method and system based on caching mechanism
CN113283307A (en) Method and system for identifying object in video and computer storage medium
CN106874255A (en) Method and device for rule matching
CN112307860A (en) Image recognition model training method and device and image recognition method and device
CN112650449A (en) Release method and release system of cache space, electronic device and storage medium
CN109165325B (en) Method, apparatus, device and computer-readable storage medium for segmenting graph data
CN109491611B (en) Metadata dropping method, device and equipment
JP6958618B2 (en) Information processing equipment, information processing methods, and programs
CN111695389A (en) Lane line clustering method and device
KR102529335B1 (en) Method for On-device Artificial Intelligence support based on Artificial Intelligence chip connection
CN115662267A (en) Map simplifying method, map simplifying device, storage medium and equipment
CN115185685A (en) Artificial intelligence task scheduling method and device based on deep learning and storage medium
CN108198413A (en) Blocking method is delayed in the intelligent transportation of a kind of big data and autonomous deep learning
CN110175296B (en) Node recommendation method and server in network graph and storage medium
CN108197536B (en) Image processing method and device, computer device and readable storage medium
CN110704241B (en) Method, device, equipment and medium for recovering file metadata

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination