WO2023024098A1 - Method, apparatus and computer-readable medium for generating a knowledge graph - Google Patents
Method, apparatus and computer-readable medium for generating a knowledge graph
- Publication number
- WO2023024098A1 (PCT/CN2021/115120)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target system
- knowledge graph
- relationship
- objects
- picture
- Prior art date
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5854—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
Definitions
- the embodiments of the present invention relate to the field of computer technology, and in particular to a method, apparatus and computer-readable medium for generating a knowledge graph.
- Embodiments of the present invention provide a method, apparatus, and computer-readable medium for generating a knowledge graph, which can quickly obtain accurate information about the objects in a target system and their relationships, so as to automatically generate a knowledge graph.
- a method for generating a knowledge graph is provided.
- a picture of the target system is obtained; target recognition is performed on the picture to obtain the category of each object in the target system and the position information of each object in the picture; the relative positional relationships among the objects in the target system are determined according to the position information of each object in the picture; and the knowledge graph of the target system is generated according to the relative positional relationships and the identified categories of the objects.
- an apparatus including a module for executing each step in the method provided in the first aspect.
- an apparatus including: at least one memory configured to store computer-readable codes; and at least one processor configured to call the computer-readable codes to execute the steps in the method provided in the first aspect.
- a computer-readable medium on which computer-readable code is stored; when the computer-readable code is executed by a processor, the processor executes the steps in the method provided in the first aspect.
- a computer program product including computer-readable code which, when executed by a processor, implements each step in the method provided in the first aspect.
- the knowledge graph is automatically generated based on computer vision technology: the object categories in the target system and the relative positional relationships between objects are determined through target recognition, and the knowledge graph is automatically generated on this basis, offering the advantages of accurate information acquisition and an efficient generation process.
- the identified categories of the objects are determined as the entities in the knowledge graph of the target system; the relationships between the entities in the knowledge graph corresponding to the objects in the target system are determined according to the relative positional relationships between the objects; and the knowledge graph of the target system is generated according to the entities and the relationships between them.
- the category of an object obtained by target recognition is used as an entity in the knowledge graph, and the relationships between the entities in the knowledge graph are determined according to the relative positional relationships between objects; the results of target recognition on the target system are thereby skillfully applied to the generation of the knowledge graph, using an object recognition method to solve what is traditionally a natural language processing problem.
- common sense about the mutual relationships between the identified objects can also be acquired; determining the relationships between the entities in the knowledge graph corresponding to the objects in the target system according to the relative positional relationships between the objects then includes: determining the relationships between the entities in the knowledge graph according to the identified relative positional relationships between the objects and the acquired common sense. Combining common sense in this way further refines the relationships between entities, making the information content of the generated knowledge graph more accurate.
- the target system is a factory, a production line or a process.
- the target system is not limited to the industrial field, but can also be a system in other technical fields.
- FIG. 1 is a schematic structural diagram of an apparatus for generating a knowledge graph provided by an embodiment of the present invention.
- FIG. 2 is a flow chart of a method for generating a knowledge graph provided by an embodiment of the present invention.
- Fig. 3 shows the process of generating a knowledge graph in one embodiment.
- Fig. 4 shows a process of performing target recognition on a picture of a target system in an embodiment.
- Fig. 5 shows the process of determining the relationships among entities in the knowledge graph of the target system in one embodiment.
- 111: Image acquisition module; 112: Target recognition module; 113: Position relationship determination module
- the term “comprising” and its variants represent open terms meaning “including but not limited to”.
- the term “based on” means “based at least in part on”.
- the terms “one embodiment” and “an embodiment” mean “at least one embodiment.”
- the term “another embodiment” means “at least one other embodiment.”
- the terms “first”, “second”, etc. may refer to different or the same object. Other definitions, whether express or implied, may be included below. Unless the context clearly indicates otherwise, the definition of a term is consistent throughout the specification.
- FIG. 1 is a schematic structural diagram of an apparatus for generating a knowledge graph provided by an embodiment of the present invention.
- the apparatus 10 for generating a knowledge graph can be implemented as a network of computer processors to execute the method 200 for generating a knowledge graph in the embodiment of the present invention, or it can be a single computer, a single-chip microcomputer or a processing chip as shown in FIG. 1, including at least one memory 101 that comprises a computer-readable medium such as random access memory (RAM).
- Apparatus 10 also includes at least one processor 102 coupled with at least one memory 101 .
- Computer executable code is stored in at least one memory 101 and, when executed by at least one processor 102, causes at least one processor 102 to perform the steps described herein.
- At least one memory 101 shown in FIG. 1 may contain a program 11 for generating a knowledge graph, so that at least one processor 102 executes the method 200 for generating a knowledge graph described in the embodiment of the present invention.
- the program 11 for generating a knowledge graph may include: an image acquisition module 111, a target recognition module 112, a position relationship determination module 113 and a knowledge graph generation module 114. As shown in Figure 3, the operations performed by each module are as follows:
- the image acquisition module 111 is configured to acquire a picture 20 of the target system, where the target system can be a factory, a production line, a process, or any part of an industrial system. Of course, the target system can also be a system or component outside the industrial field.
- the target recognition module 112 is configured to perform target recognition on the picture 20 to obtain the category 31 of each object in the target system and the position information 32 of each object in the picture 20.
- the relative positional relationship 42 includes, but is not limited to: angle information between objects; the distance between the center points of the identified candidate areas (bounding boxes) of the objects; the relative relationship between the positions of the identified candidate areas of the objects; the size of the overlapping area between candidate areas; and the relative positional relationship between overlapping areas. Because the camera may shoot the target system from different angles, the proportional relationship between objects may not match the actual situation.
- the relative positional relationship 42 therefore includes multiple kinds of position information, so that feature variables can be extracted according to different business requirements, finally yielding relatively accurate relationships between objects.
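The bounding-box-derived features just described (overlap, left/right/up/down, center distance) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the `(x1, y1, x2, y2)` box format and feature names are assumptions, and intersection-over-union is used as one possible definition of the overlap ratio.

```python
from math import hypot

def position_features(box_a, box_b):
    """Extract relative-position features from two bounding boxes.

    Boxes are (x1, y1, x2, y2) in image coordinates. The feature set
    mirrors the overlap ratio, left/right/up/down flags and center
    distance described in the text.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Overlap ratio: here computed as intersection over union of the two areas.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter

    # Center points of the two candidate areas.
    ca = ((ax1 + ax2) / 2, (ay1 + ay2) / 2)
    cb = ((bx1 + bx2) / 2, (by1 + by2) / 2)

    return {
        "overlap_ratio": inter / union if union else 0.0,
        "left": ca[0] < cb[0],    # A is to the left of B
        "right": ca[0] > cb[0],   # A is to the right of B
        "up": ca[1] < cb[1],      # A is above B in image coordinates
        "down": ca[1] > cb[1],    # A is below B in image coordinates
        "center_distance": hypot(ca[0] - cb[0], ca[1] - cb[1]),
    }
```

Because all of these quantities come directly from the detector's bounding boxes, they can be recomputed for any pair of detected objects without re-running the detector.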
- a knowledge graph generating module 114 configured to generate a knowledge graph 50 of the target system according to the relative positional relationship 42 and the recognized category 31 of each object.
- the knowledge graph generating module 114 may include: an entity relationship determining unit 1142 and a knowledge graph generating unit 1141 .
- the entity relationship determination unit 1142 can determine the relationships 42' between entities in the knowledge graph corresponding to the objects in the target system according to the relative positional relationship 42 between the objects.
- the knowledge graph generation unit 1141 can determine the identified category 31 of each object as an entity in the knowledge graph of the target system, and generate the knowledge graph 50 of the target system according to the entities in the knowledge graph of the target system and the relationships 42' between the entities.
- the device 10 may further include a common sense acquisition module 115 configured to acquire common sense 60 about the mutual relationships between identified objects, as an input of the entity relationship determination unit 1142.
- the entity relationship determination unit 1142 can determine the relationship 42' between entities in the knowledge graph according to the identified relative positional relationship 42 between objects and common sense 60.
- the category 31 of each object can also be used as an input to help determine the relationship between objects.
- entity recognition and knowledge graph generation are no longer treated as two independent problems, as they are in the traditional approach.
- in the process of generating the knowledge graph, the relationships between entities are obtained at the same time as the entity information, so the knowledge graph can be generated automatically.
- object recognition techniques from computer vision are used to identify entities in pictures or videos, and the relationships between entities are then determined based on the relative positional relationships between the detected objects in the picture, optionally also using common sense to further refine the relationships between entities.
- a table including each entity pair and the relationship between the entities in the pair can be generated, and finally a knowledge graph is automatically generated based on the table.
- only pictures need to be input, which skillfully avoids the difficulty of extracting entity information from different data sources. The whole process can be completed automatically, which significantly reduces human participation, greatly improves the efficiency of knowledge map generation, and reduces economic costs.
- the target recognition module 112 in an embodiment of the present invention will be described below with reference to FIG. 4 .
- the target system is a production line in an automobile manufacturing workshop; Faster R-CNN (Faster Regions with CNN features) can be chosen as the target recognition model used by the target recognition module 112.
- Image 20 includes a car, two tires, a car logo and a mechanical arm.
- Faster-RCNN can include four parts.
- the convolutional neural network (Convolutional Neural Networks, CNN) is used to extract features from the picture 20; its input is the entire picture 20 and its output is the extracted features, that is, the feature map shown in Figure 4, together with multiple candidate regions (i.e. bounding boxes).
- Region of interest pooling (Region of interest pooling, ROI pooling) is used to convert bounding boxes of different sizes into the same size, that is, to unify the image size and output, which is beneficial to subsequent classification tasks.
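The size-unification step performed by ROI pooling can be sketched in a few lines. This is a simplified single-channel max-pooling version for illustration only; the 2x2 output grid and the `(x1, y1, x2, y2)` box format are assumptions, and a real Faster R-CNN implementation pools multi-channel feature maps.

```python
import numpy as np

def roi_pool(feature_map, box, output_size=(2, 2)):
    """Max-pool the region `box` of a 2D feature map to a fixed output size.

    `box` is (x1, y1, x2, y2) in feature-map coordinates. The region is
    divided into an output_size grid and each cell is max-pooled, so
    candidate regions of different sizes all produce same-sized outputs,
    which is what makes the subsequent classification step possible.
    """
    x1, y1, x2, y2 = box
    region = feature_map[y1:y2, x1:x2]
    oh, ow = output_size
    h_edges = np.linspace(0, region.shape[0], oh + 1, dtype=int)
    w_edges = np.linspace(0, region.shape[1], ow + 1, dtype=int)
    out = np.empty(output_size)
    for i in range(oh):
        for j in range(ow):
            cell = region[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            out[i, j] = cell.max()
    return out
```

Whatever the size of the input box, the output is always an `output_size` array, which is the property the text refers to as "unifying the image size".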
- the other part is classification and regression, which classifies each bounding box and can give the final position information.
- a complete object recognition model requires the above components to work together. A trained model only needs some input pictures to quickly identify all objects on the production line. Using an object recognition model avoids manually entering entities to generate the knowledge graph. It should be noted that Faster R-CNN is only one example of a target recognition model; other CNN-based models can also be used to recognize the objects and position information in the picture 20.
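The output of such a trained detector typically arrives as candidate boxes with class labels and confidence scores; a post-processing step keeps only confident detections as (category, position) pairs, which is the form the later steps of the method consume. The dictionary layout, labels and threshold below are illustrative assumptions, not the patent's specified interface.

```python
def collect_objects(detections, score_threshold=0.7):
    """Keep confident detections as (category, box) pairs.

    detections: list of dicts with 'label', 'score' and 'box' (x1, y1, x2, y2).
    """
    objects = []
    for det in detections:
        if det["score"] >= score_threshold:
            objects.append((det["label"], det["box"]))
    return objects

# Example detections for the production-line picture (values illustrative).
raw = [
    {"label": "car", "score": 0.98, "box": (50, 40, 400, 260)},
    {"label": "tire", "score": 0.95, "box": (80, 200, 140, 260)},
    {"label": "robotic arm", "score": 0.91, "box": (380, 10, 520, 300)},
    {"label": "tire", "score": 0.35, "box": (0, 0, 30, 30)},  # low confidence, dropped
]
objects = collect_objects(raw)
```

The surviving (category, box) pairs supply both the entities (categories 31) and the position information 32 used downstream.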
- the relationship between entities can be obtained from the relative positional relationship between objects.
- both the tire and the car logo are within the location of the car, which means that the car includes the tire and the car logo, and the entity “car” has the attributes “tire” and “car logo”. Therefore, it is feasible to obtain the corresponding relationships between entities from the relative positional relationships between objects.
- location information is important, but prior knowledge about images is also informative; this prior knowledge is here called “common sense”. Therefore, in some embodiments of the present invention, the relative positional relationships between objects and common sense can be used jointly to identify the relationships between entities.
- the entity relationship determining unit 1142 can perform classification based on sufficient core features.
- An example of the entity relationship determination unit 1142 is an artificial neural network (Artificial Neural Network, ANN), which is used to identify the relationship between the entities corresponding to the objects in the picture 20.
- the input of the ANN includes the relative position relationship 42 between objects in the picture, and the possible relationship between two entities from common sense 60, and the output is the relationship 42' between the entities.
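The entity relationship prediction step can be sketched as a minimal MLP forward pass. This is a sketch under stated assumptions, not the patent's trained model: the feature layout (position features plus a common-sense encoding), the relation label set, and the random, untrained weights are all illustrative.

```python
import numpy as np

RELATIONS = ["includes", "grabs", "no relation"]  # illustrative label set

def relation_forward(features, w1, b1, w2, b2):
    """Minimal MLP forward pass: pair features -> relation prediction.

    `features` would concatenate the relative-position features 42 of an
    object pair with an encoding of the candidate relations from common
    sense 60. A real entity relationship prediction model would be trained;
    the weights here are only parameters of the sketch.
    """
    h = np.maximum(0, features @ w1 + b1)   # hidden layer with ReLU
    logits = h @ w2 + b2
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()                 # softmax over relation labels
    return RELATIONS[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
features = rng.normal(size=6)               # e.g. overlap, L/R/U/D flags, distance
w1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, len(RELATIONS))), np.zeros(len(RELATIONS))
relation, probs = relation_forward(features, w1, b1, w2, b2)
```

In a trained network the weight matrices would be fitted on labeled entity pairs so that the softmax output reflects the true relation distribution.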
- the position relationship determination module 113 performs feature extraction on the position information 32 of each object in the picture 20 and obtains the relative positional relationship 42 between the objects in the target system; the extracted features of the relative positional relationship 42 include at least one of the following items of information:
- the entity relationship determining unit 1142 can infer the relationship between entities through the entity relationship prediction model (ie, the aforementioned ANN).
- the relationship between the car and the tire is that the car includes the tire
- the relationship between the car and the robotic arm is that the robotic arm grabs the car
- the two tires are the same entity.
- a table including entity A, entity B and the relationship between entities can be obtained, and the table can be output by the entity relationship determining unit 1142 .
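A minimal sketch of producing such a table is shown below. Note the simplification: the patent infers relations with a trained model plus common sense, whereas this illustration substitutes a simple bounding-box containment rule (an object whose box lies inside another's is "included" by it), so it only reproduces containment-style relations like car-includes-tire.

```python
def contains(outer, inner):
    """True if box `inner` lies entirely within box `outer` (x1, y1, x2, y2)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def relation_table(objects):
    """objects: list of (category, box). Returns rows (entity A, entity B, relation)."""
    rows = []
    for i, (cat_a, box_a) in enumerate(objects):
        for cat_b, box_b in objects[i + 1:]:
            if contains(box_a, box_b):
                rows.append((cat_a, cat_b, "includes"))
            elif contains(box_b, box_a):
                rows.append((cat_b, cat_a, "includes"))
    return rows

# Illustrative detections: the tire and the logo lie inside the car's box.
objects = [
    ("car", (50, 40, 400, 260)),
    ("tire", (80, 200, 140, 260)),
    ("car logo", (200, 100, 240, 130)),
]
table = relation_table(objects)
```

Each row of the resulting table is an (entity A, entity B, relation) triple of the kind the entity relationship determination unit outputs.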
- the script can be triggered to automatically connect to the neo4j database and generate a knowledge graph 50 based on the table.
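One possible shape for such a script is to turn each table row into a Cypher `MERGE` statement for Neo4j. The `Entity` node label and the upper-cased relationship type are illustrative naming choices, not prescribed by the method.

```python
def table_to_cypher(rows):
    """Turn (entity A, entity B, relation) rows into Cypher MERGE statements.

    MERGE is used so that re-running the script does not duplicate nodes
    or relationships that already exist in the graph.
    """
    statements = []
    for a, b, rel in rows:
        rel_type = rel.upper().replace(" ", "_")
        statements.append(
            f"MERGE (a:Entity {{name: '{a}'}}) "
            f"MERGE (b:Entity {{name: '{b}'}}) "
            f"MERGE (a)-[:{rel_type}]->(b)"
        )
    return statements

rows = [("car", "tire", "includes"), ("robotic arm", "car", "grabs")]
statements = table_to_cypher(rows)
# Each statement can then be run against a Neo4j database, e.g. with the
# official driver via session.run(statement).
```

For real entity names, parameterized queries (passing `name` as a query parameter) would be safer than string interpolation.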
- the picture 20 is taken as an example here, but in actual processing, the embodiment of the present invention can also handle video, since a video can be regarded as a sequence of picture frames.
- the above-mentioned modules included in the program 11 for generating the knowledge graph can also be implemented in hardware, with the device 10 performing the operations of the method for generating the knowledge graph: for example, the control logic of each process involved in the method can be burned in advance into a Field-Programmable Gate Array (FPGA) chip or a Complex Programmable Logic Device (CPLD), and these chips or devices then perform the functions of the above-mentioned modules.
- the specific implementation can be determined according to engineering practice.
- the apparatus 10 for generating a knowledge graph may further include a communication module 103 for communicating with other devices.
- the embodiments of the present invention may include devices having architectures different from those shown in FIG. 1 .
- the above architecture is only exemplary, and is used to explain the method 200 for generating a knowledge graph provided by the embodiment of the present invention.
- At least one processor 102 may include a microprocessor, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), a state machine, and the like.
- ASIC application specific integrated circuit
- DSP digital signal processor
- CPU central processing unit
- GPU graphics processing unit
- Examples of computer-readable media include, but are not limited to, floppy disks, CD-ROMs, magnetic disks, memory chips, ROM, RAM, ASICs, configured processors, optical media, magnetic tape or other magnetic media, or any other medium from which a computer processor can read code.
- various other forms of computer-readable media can transmit or carry the code to the computer, including routers, private or public networks, or other wired and wireless transmission devices or channels.
- Code may include code in any computer programming language, including C, C++, Visual Basic, Java, and JavaScript.
- FIG. 2 is a flow chart of a method for generating a knowledge graph provided by an embodiment of the present invention.
- the method 200 may be executed by the aforementioned device 10 for generating a knowledge graph, and may include the following steps:
- S201: Acquire a picture 20 of the target system;
- S202: Perform target recognition on the picture 20 to obtain the category 31 of each object in the target system and the position information 32 of each object in the picture 20;
- S203: Determine, according to the position information 32 of each object in the picture, the relative positional relationship 42 between the objects in the target system;
- S204: Generate the knowledge graph 50 of the target system according to the relative positional relationship 42 and the identified category 31 of each object.
- step S204 may include: determining (S2041) the identified category 31 of each object as an entity in the knowledge graph of the target system; determining (S2042), according to the relative positional relationship 42 between the objects, the relationships 42' between the entities in the knowledge graph corresponding to the objects; and generating (S2043) the knowledge graph 50 of the target system from the entities and the relationships between them.
- the method may further include: acquiring (S205) common sense 60 about the mutual relationships between the identified objects, in which case step S2042 determines the relationships 42' between entities according to both the relative positional relationships 42 and the acquired common sense 60.
- Embodiments of the present invention further provide a computer-readable medium on which computer-readable code is stored, and a computer program product including the computer-readable code.
- the computer readable code implements the method 200 when executed by a processor.
- Examples of computer-readable media include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-volatile memory cards and ROM.
- the computer readable codes can be downloaded from a server computer or cloud by a communication network.
Landscapes
- Engineering & Computer Science (AREA)
- Library & Information Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
Description
Feature | Definition |
---|---|
Overlap ratio | ratio of the overlapping area to the total candidate area |
Relative position L (left) | Entity A is to the left of Entity B |
Relative position R (right) | Entity A is to the right of Entity B |
Relative position U (up) | Entity A is in front of Entity B |
Relative position D (down) | Entity A is behind Entity B |
Relative distance | distance between the center of Entity A and the center of Entity B |
Claims (11)
- A method (200) for generating a knowledge graph, characterized in that it comprises: acquiring (S201) a picture (20) of a target system; performing (S202) target recognition on the picture (20) to obtain the category (31) of each object in the target system and the position information (32) of each object in the picture (20); determining (S203), according to the position information (32) of each object in the picture, the relative positional relationship (42) between the objects in the target system; and generating (S204) the knowledge graph (50) of the target system according to the relative positional relationship (42) and the identified category (31) of each object.
- The method according to claim 1, characterized in that generating (S204) the knowledge graph of the target system according to the relative positional relationship (42) and the identified category (31) of each object comprises: determining (S2041) the identified category (31) of each object as an entity in the knowledge graph of the target system; determining (S2042), according to the relative positional relationship (42) between the objects, the relationships (42') between the entities in the knowledge graph corresponding to the objects in the target system; and generating (S2043) the knowledge graph (50) of the target system according to the entities in the knowledge graph of the target system and the relationships between the entities.
- The method according to claim 1, characterized in that the method further comprises: acquiring (S205) common sense (60) about the mutual relationships between the identified objects; and determining (S2042), according to the relative positional relationships between the objects, the relationships between the entities in the knowledge graph corresponding to the objects in the target system comprises: determining the relationships (42') between the entities in the knowledge graph according to the identified relative positional relationships (42) between the objects and the acquired common sense (60).
- The method according to claim 1, wherein the target system is a factory, a production line or a process.
- An apparatus (10) for generating a knowledge graph, characterized in that it comprises: an image acquisition module (111) configured to acquire a picture (20) of a target system; a target recognition module (112) configured to perform target recognition on the picture (20) to obtain the category (31) of each object in the target system and the position information (32) of each object in the picture (20); a position relationship determination module (113) configured to determine, according to the position information (32) of each object in the picture, the relative positional relationship (42) between the objects in the target system; and a knowledge graph generation module (114) configured to generate the knowledge graph (50) of the target system according to the relative positional relationship (42) and the identified category (31) of each object.
- The apparatus (10) according to claim 5, characterized in that the knowledge graph generation module (114) comprises an entity relationship determination unit (1142) and a knowledge graph generation unit (1141), wherein: the entity relationship determination unit (1142) is configured to determine, according to the relative positional relationship (42) between the objects, the relationships (42') between the entities in the knowledge graph corresponding to the objects in the target system; and the knowledge graph generation unit (1141) is configured to determine the identified category (31) of each object as an entity in the knowledge graph of the target system, and to generate the knowledge graph (50) of the target system according to the entities in the knowledge graph of the target system and the relationships (42') between the entities.
- The apparatus (10) according to claim 5, characterized in that it further comprises a common sense acquisition module (115) configured to acquire common sense (60) about the mutual relationships between the identified objects, wherein the entity relationship determination unit (1142) is specifically configured to determine the relationships (42') between the entities in the knowledge graph according to the identified relative positional relationships (42) between the objects and the acquired common sense (60).
- The apparatus (10) according to claim 5, wherein the target system is a factory, a production line or a process.
- An apparatus (10) for generating a knowledge graph, characterized in that it comprises: at least one memory (101) configured to store computer-readable code; and at least one processor (102) configured to call the computer-readable code to execute the steps of the method according to any one of claims 1 to 4.
- A computer-readable medium, characterized in that computer-readable code is stored on the computer-readable medium, and when the computer-readable code is executed by a processor, it causes the processor to execute the steps of the method according to any one of claims 1 to 4.
- A computer program product comprising computer-readable code, characterized in that, when executed by a processor, the computer-readable code implements the steps of the method according to any one of claims 1 to 4.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21954621.5A EP4372582A1 (en) | 2021-08-27 | 2021-08-27 | Knowledge graph generation method and apparatus and computer readable medium |
PCT/CN2021/115120 WO2023024098A1 (zh) | 2021-08-27 | 2021-08-27 | Method, apparatus and computer-readable medium for generating a knowledge graph |
CN202180098441.8A CN117396861A (zh) | 2021-08-27 | 2021-08-27 | Method, apparatus and computer-readable medium for generating a knowledge graph |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/115120 WO2023024098A1 (zh) | 2021-08-27 | 2021-08-27 | Method, apparatus and computer-readable medium for generating a knowledge graph |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023024098A1 true WO2023024098A1 (zh) | 2023-03-02 |
Family
ID=85322409
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/115120 WO2023024098A1 (zh) | 2021-08-27 | 2021-08-27 | 生成知识图谱的方法、装置和计算机可读介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4372582A1 (zh) |
CN (1) | CN117396861A (zh) |
WO (1) | WO2023024098A1 (zh) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160203137A1 (en) * | 2014-12-17 | 2016-07-14 | InSnap, Inc. | Imputing knowledge graph attributes to digital multimedia based on image and video metadata |
CN106355627A (zh) * | 2015-07-16 | 2017-01-25 | 中国石油化工股份有限公司 | A method and system for generating a knowledge graph |
CN109635121A (zh) * | 2018-11-07 | 2019-04-16 | 平安科技(深圳)有限公司 | Medical knowledge graph creation method and related apparatus |
CN110457403A (zh) * | 2019-08-12 | 2019-11-15 | 南京星火技术有限公司 | Graph network decision system and method, and knowledge graph construction method |
CN110598021A (zh) * | 2018-05-25 | 2019-12-20 | 阿里巴巴集团控股有限公司 | Method, apparatus and system for acquiring a knowledge graph of a picture |
US20200233899A1 (en) * | 2019-01-17 | 2020-07-23 | International Business Machines Corporation | Image-based ontology refinement |
CN111967367A (zh) * | 2020-08-12 | 2020-11-20 | 维沃移动通信有限公司 | Image content extraction method and apparatus, and electronic device |
CN112069326A (zh) * | 2020-09-03 | 2020-12-11 | Oppo广东移动通信有限公司 | Knowledge graph construction method and apparatus, electronic device and storage medium |
- 2021-08-27 WO PCT/CN2021/115120 patent/WO2023024098A1/zh active Application Filing
- 2021-08-27 CN CN202180098441.8A patent/CN117396861A/zh active Pending
- 2021-08-27 EP EP21954621.5A patent/EP4372582A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN117396861A (zh) | 2024-01-12 |
EP4372582A1 (en) | 2024-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021022970A1 (zh) | A component recognition method and system based on multi-layer random forest | |
CN111461245A (zh) | A semantic mapping method and system for a wheeled robot fusing point cloud and image | |
CN103208123B (zh) | Image segmentation method and system | |
JP2018200685A (ja) | Forming a dataset for fully supervised learning | |
CN110765922A (zh) | A binocular-vision object obstacle detection system for an AGV | |
CN106951830B (zh) | A multi-object labeling method for image scenes based on prior condition constraints | |
CN105528588A (zh) | A lane line recognition method and device | |
CN112232293A (zh) | Image processing model training, image processing method and related device | |
CN110232379A (zh) | A vehicle attitude detection method and system | |
WO2021151277A1 (zh) | Method and apparatus for determining the damage degree of a target object, electronic device and storage medium | |
CN104992147A (zh) | A license plate recognition method based on deep learning in a combined fast-slow cloud computing environment | |
TW201937405A (zh) | Object labeling system and method | |
Yang et al. | Multi-view semantic learning network for point cloud based 3D object detection | |
EP4105600A2 (en) | Method for automatically producing map data, related apparatus and computer program product | |
CN112052855A (zh) | A license plate recognition method and device based on deep learning | |
TW202121331A (zh) | Machine-learning-based object recognition system and method | |
Lv et al. | A novel approach for detecting road based on two-stream fusion fully convolutional network | |
Mo et al. | Point-by-point feature extraction of artificial intelligence images based on the Internet of Things | |
CN114708566A (zh) | An autonomous-driving object detection method based on improved YOLOv4 | |
KR20230132350A (ko) | Federated detection model training, federated detection method, apparatus, device and medium | |
GB2572025A (en) | Urban environment labelling | |
Yang et al. | C-RPNs: Promoting object detection in real world via a cascade structure of Region Proposal Networks | |
WO2024093641A1 (zh) | Multi-modal fusion high-precision map element recognition method, apparatus, device and medium | |
Kong et al. | PASS3D: Precise and accelerated semantic segmentation for 3D point cloud | |
WO2023024098A1 (zh) | Method, apparatus and computer-readable medium for generating a knowledge graph | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21954621; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 202180098441.8; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 2021954621; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2021954621; Country of ref document: EP; Effective date: 20240216 |
| NENP | Non-entry into the national phase | Ref country code: DE |