CN112395917B - Region identification method and device, storage medium and electronic device - Google Patents

Region identification method and device, storage medium and electronic device

Info

Publication number
CN112395917B
CN112395917B (application CN201910755305.6A)
Authority
CN
China
Prior art keywords
area
image information
type
target
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910755305.6A
Other languages
Chinese (zh)
Other versions
CN112395917A (en)
Inventor
The inventor has requested that the name not be published
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninebot Beijing Technology Co Ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd
Priority to CN201910755305.6A
Publication of CN112395917A
Application granted
Publication of CN112395917B
Current legal status: Active

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/35 - Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36 - Indoor scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention provides an area identification method and device, a storage medium and an electronic device. The method includes: acquiring image information to be identified, wherein the image information to be identified includes an outdoor driving scene of a target movable device; obtaining an area identification result from the image information to be identified based on a target neural network model; and setting the target movable device to disallow switching travel between a first area and a second area in a case where the area identification result indicates that the outdoor driving scene includes the first area of a first area type and the second area of a second area type and an abnormal edge exists at the boundary between the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas in which the target movable device is allowed to travel. The invention solves the problem that vehicles in the related art cannot adapt to different driving scenes, and achieves the effect that a vehicle can adapt to different scenes.

Description

Region identification method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to a method and apparatus for identifying an area, a storage medium, and an electronic apparatus.
Background
In the prior art, an outdoor intelligent mobile device, such as an outdoor vehicle or an unmanned vehicle, uses position sensors to detect its own position, which helps it formulate subsequent driving strategies.
When multiple travelable areas appear simultaneously in an outdoor scene, for example sidewalks on both sides of a road, the prior art cannot determine whether the device may move from the road onto the sidewalk (i.e., whether the edge between the road and the sidewalk is an abnormal edge that cannot be crossed). Taking an outdoor vehicle as an example, this determination is very important for path planning of the outdoor vehicle, and in some scenarios where vehicles meet it directly affects running efficiency and safety.
In view of the above technical problems, no effective solution has been proposed in the related art.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying an area, a storage medium and an electronic device, which are used for at least solving the problem that vehicles in the related art cannot adapt to different driving scenes.
According to an embodiment of the present invention, there is provided a method for identifying an area, including: acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of target movable equipment; obtaining a region identification result through the image information to be identified based on the target neural network model; and setting the target movable equipment to not allow switching running between the first area and the second area when the area identification result indicates that the outdoor running scene comprises a first area of a first area type and a second area of a second area type and an abnormal edge exists at the junction of the first area and the second area, wherein the first area of the first area type and the second area of the second area type are traffic areas allowing the target movable equipment to run.
Optionally, after obtaining the region identification result through the image information to be identified based on the target neural network model, the method further includes: and setting the target movable device to allow switching running between the first region and the second region when the region identification result indicates that the first region of the first region type and the second region of the second region type are included in the outdoor running scene and that the boundary between the first region and the second region does not have the abnormal edge.
Optionally, the setting the target movable device to not allow the switching running between the first area and the second area includes: and setting the target movable device not to allow switching from the first area to the second area to run under the condition that the target movable device is currently running on the first area.
Optionally, the setting the target movable device to allow switching between the first area and the second area includes: setting the target movable device to allow switching from the first area to the second area for traveling in a case where the target movable device is currently traveling on the first area; after setting the target movable device to allow traveling on the second area while switching from the first area, the method further includes: acquiring a switching instruction; and controlling the target movable equipment to switch from the first area to the second area to run in response to the switching instruction.
Optionally, the acquiring the image information to be identified includes: acquiring the image information to be identified, which is obtained by shooting the outdoor driving scene by the camera equipment; the target neural network model is used for marking passing areas of different area types by using at least different colors; wherein, when the area recognition result indicates that the outdoor driving scene includes a first area of the first area type and a second area of the second area type, and an abnormal edge exists at a boundary between the first area and the second area, the area recognition result includes a first color mark for indicating the first area, a second color mark for indicating the second area, and a third color mark for indicating the abnormal edge.
Optionally, before the acquiring the image information to be identified, the method further includes: acquiring a group of sample image information, wherein the group of sample image information comprises first sample image information and second sample image information, the image information in the first sample image information is used for representing a first type of outdoor driving scene, the first type of outdoor driving scene comprises a plurality of adjacent passing areas with different area types and the boundary of which is provided with the abnormal edge, the image information in the second sample image information is used for representing a second type of outdoor driving scene, and the second type of outdoor driving scene comprises a plurality of adjacent passing areas with different area types and the boundary of which is not provided with the abnormal edge; training an initial neural network model by using the sample image information to obtain the target neural network model, wherein the target neural network model has the following conditions: the error between a first estimated recognition result and a first real recognition result meets a preset first convergence condition, the first estimated recognition result is used for indicating the estimated abnormal edge information in the group of sample image information, the first real recognition result is used for indicating the predetermined abnormal edge information in the group of sample image information, and the abnormal edge information comprises quantity information and position information; the first estimated recognition result and the first real recognition result are used for indicating whether the abnormal edge exists in the sample image information or not and the position of the abnormal edge.
Optionally, the target neural network model further has the following conditions: and the error between a second estimated recognition result and a second real recognition result meets a preset second convergence condition, wherein the second estimated recognition result is used for indicating the estimated regional positions of a plurality of adjacent passing regions in the first type outdoor driving scene and the second type outdoor driving scene, and the second real recognition result is used for indicating the predetermined regional positions of a plurality of adjacent passing regions in the first type outdoor driving scene and the second type outdoor driving scene.
According to another embodiment of the present invention, there is provided an identification device of an area, including: the first acquisition module is used for acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of the target movable equipment; the first determining module is used for obtaining a region identification result through the image information to be identified based on the target neural network model; a first setting module, configured to set the target mobile device to not allow switching running between the first area and the second area when the area identification result indicates that the outdoor running scene includes a first area of a first area type and a second area of a second area type, and an abnormal edge exists at a boundary between the first area and the second area, where the first area of the first area type and the second area of the second area type are both traffic areas allowing running of the target mobile device.
According to a further embodiment of the invention, there is also provided a storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, image information to be identified is acquired, wherein the image information to be identified includes an outdoor driving scene of a target movable device; an area identification result is obtained from the image information to be identified based on a target neural network model; and, in a case where the area identification result indicates that the outdoor driving scene includes a first area of a first area type and a second area of a second area type and an abnormal edge exists at the boundary between the first area and the second area, the target movable device is set to disallow switching travel between the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas in which the target movable device is allowed to travel. This solves the problem that vehicles in the related art cannot adapt to different driving scenes, and achieves the effect that a vehicle can adapt to different scenes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a block diagram of the hardware structure of a mobile terminal for implementing a region identification method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of identifying regions according to an embodiment of the invention;
FIG. 3 is a schematic view of a scenario according to an embodiment of the present invention;
fig. 4 is a block diagram of a structure of an area identifying apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiment provided in the first embodiment of the present application may be executed in a mobile terminal, a computer terminal or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to an area identification method in an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a region identification method in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Alternatively, a depth camera or a laser radar may be provided in the target mobile device to supply the information needed for outdoor navigation and path planning. However, because depth cameras and laser radars are expensive, the target mobile device is often equipped only with a common imaging device that cannot provide image depth information.
In that case, the target mobile device cannot obtain sufficient external environment information from the image information provided by the common imaging device, so its driving safety and decision efficiency cannot be guaranteed.
Based on the above-mentioned problems still existing, in this embodiment, there is provided a method for identifying an area, and fig. 2 is a flowchart of a method for identifying an area according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the steps of:
step S202, obtaining image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of a target movable device;
step S204, obtaining a region identification result through the image information to be identified based on a target neural network model;
in step S206, when the area identification result indicates that the outdoor driving scene includes a first area of the first area type and a second area of the second area type, and an abnormal edge exists at the boundary between the first area and the second area, the target movable device is set to not allow the switching driving between the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas allowing the target movable device to drive.
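For illustration only, the decision logic of steps S202 to S206 can be sketched as follows. This is a minimal sketch under assumed interfaces, not the claimed implementation: the wrapper function segment_scene and the result fields area_types and abnormal_edge_at_boundary are hypothetical names introduced here for clarity.

# Minimal sketch of steps S202-S206; names and interfaces are assumptions.
from dataclasses import dataclass

@dataclass
class RegionResult:
    area_types: tuple                 # e.g. ("highway", "sidewalk"): types of the two traffic areas
    abnormal_edge_at_boundary: bool   # True if an abnormal edge separates the two areas

def segment_scene(image) -> RegionResult:
    """Placeholder for the target neural network model (step S204)."""
    raise NotImplementedError

def switching_allowed(image) -> bool:
    """Step S206: decide whether switching travel between the two areas is allowed."""
    result = segment_scene(image)                    # area identification result
    first_type, second_type = result.area_types      # two traffic areas the device may travel on
    if first_type != second_type and result.abnormal_edge_at_boundary:
        return False                                 # abnormal edge present: switching disallowed
    return True                                      # otherwise switching may be allowed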
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
Alternatively, the present embodiment may be applied to a scene where a vehicle travels, where the traffic areas of different area types may be sidewalks, highways, or the like.
Alternatively, the abnormal edge may be an edge that the target mobile device cannot traverse, including but not limited to a trench, a pipeline, a roadblock, a fence, green-belt vegetation, and the like. Target mobile devices include, but are not limited to, robots, vehicles, bicycles, and the like.
Alternatively, when multiple travelable areas appear simultaneously in an outdoor scene, for example sidewalks on both sides of a road, the vehicle can determine whether it may enter the sidewalk from the road, that is, whether an abnormal edge between the road and the sidewalk prevents crossing.
In an alternative embodiment, after obtaining the region identification result by the image information to be identified based on the target neural network model, the method further includes:
s1, setting the target movable equipment to allow switching running between the first area and the second area when the area identification result indicates that the first area of the first area type and the second area of the second area type are included in the outdoor running scene and no abnormal edge exists at the junction of the first area and the second area.
Alternatively, in the present embodiment, in a scene where the target movable device travels on a road, there is no abnormal edge between a bus lane and an ordinary travel lane, so switching between them may be performed in an emergency.
In this way, the switching travel of the target movable device is planned by identifying whether an abnormal edge exists at the boundary between the first area and the second area.
In an alternative embodiment, setting the target mobile device to disallow switching travel between the first area and the second area includes:
S1, in a case where the target movable device is currently traveling on the first area, setting the target movable device to disallow switching from the first area to the second area.
Alternatively, in the present embodiment, for example, the target movable device traveling on a highway is not allowed to switch from the highway to the emergency lane.
By limiting the driving area of the target movable equipment, the safe driving of the target movable equipment is facilitated.
In an alternative embodiment, setting the target mobile device to allow switching travel between the first area and the second area includes:
s1, setting the target movable equipment to allow switching from the first area to the second area to run under the condition that the target movable equipment is currently running on the first area;
after setting the target mobile device to allow switching from the first area to the second area for traveling, the method further comprises:
s2, acquiring a switching instruction; and controlling the target movable device to switch from the first area to the second area to run in response to the switching instruction.
Alternatively, the switching instruction may be a voice or text instruction. For example, if an emergency vehicle such as an ambulance follows the target mobile device during road travel, the target mobile device may be instructed to switch from the road to another lane, such as an emergency lane or a sidewalk.
By controlling the target movable device through the switching instruction, the safe running of the target movable device can be ensured.
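As a sketch of this behaviour, a switching instruction might be handled as below once switching is permitted; the command set and the switch_to_second_area interface are assumptions made for illustration, not part of the patent.

# Illustrative handling of a switching instruction (hypothetical names and commands).
def handle_switch_instruction(device, instruction: str, allowed: bool) -> None:
    """Switch the target movable device from the first area to the second area on request."""
    if not allowed:
        return                                        # abnormal edge present: ignore the request
    if instruction.strip().lower() in {"switch area", "give way"}:  # e.g. yielding to an ambulance
        device.switch_to_second_area()                # assumed control interface on the device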
In an alternative embodiment, acquiring the image information to be identified includes:
s1, acquiring image information to be identified, which is obtained by shooting an outdoor driving scene by using camera equipment; the target neural network model is used for marking passing areas of different area types by using at least different colors; wherein, in the case that the region recognition result indicates a first region including a first region type and a second region including a second region type in the outdoor driving scene, and an abnormal edge exists at a boundary of the first region and the second region, the region recognition result includes a first color mark for indicating the first region, a second color mark for indicating the second region, and a third color mark for indicating the abnormal edge.
Optionally, the image capturing apparatus includes, but is not limited to, a depth camera, a common camera, a lidar apparatus, and the like, or a combination of these devices. A common camera is preferred because it saves cost.
Alternatively, in the present embodiment, as shown in fig. 3, (a) is an actual scene and (b) is a schematic diagram of the corresponding color-marked recognition result.
According to the embodiment, the passing areas with different area types are marked by using different colors through the target neural network model, so that the accuracy of identifying the different areas can be improved.
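A per-pixel colour marking of the recognition result could look like the following sketch; the label values and the colour palette are assumptions chosen for illustration.

import numpy as np

# Hypothetical label convention: 0 = background, 1 = first area (e.g. road),
# 2 = second area (e.g. sidewalk), 3 = abnormal edge at the boundary.
PALETTE = {
    0: (0, 0, 0),      # background
    1: (0, 255, 0),    # first color mark for the first area
    2: (0, 0, 255),    # second color mark for the second area
    3: (255, 0, 0),    # third color mark for the abnormal edge
}

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Turn an HxW label map into an HxWx3 color image of the area recognition result."""
    out = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for label, color in PALETTE.items():
        out[label_map == label] = color
    return out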
In an alternative embodiment, before acquiring the image information to be identified, the method further comprises:
s1, acquiring a group of sample image information, wherein the group of sample image information comprises first sample image information and second sample image information, the image information in the first sample image information is used for representing a first type outdoor driving scene, the first type outdoor driving scene comprises a plurality of adjacent passing areas with different area types and abnormal edges at the junction, the image information in the second sample image information is used for representing a second type outdoor driving scene, and the second type outdoor driving scene comprises a plurality of adjacent passing areas with different area types and no abnormal edges at the junction;
s2, training an initial neural network model by using a group of sample image information to obtain a target neural network model, wherein the target neural network model has the following conditions: the error between the first estimated recognition result and the first real recognition result meets a preset first convergence condition, the first estimated recognition result is used for indicating abnormal edge information in a group of estimated sample image information, the first real recognition result is used for indicating the preset abnormal edge information in the group of sample image information, and the abnormal edge information comprises quantity information and position information; the first estimated recognition result and the first real recognition result are used for indicating whether an abnormal edge exists in the sample image information and the position of the abnormal edge.
Alternatively, in the present embodiment, a set of sample image information may be acquired by the image capturing apparatus.
According to this embodiment, the target neural network is trained with the above-described sample image information, so that the traveling accuracy of the target movable device can be improved.
In an alternative embodiment, the target neural network model further has the following conditions: and the error between the second estimated recognition result and the second real recognition result meets a preset second convergence condition, wherein the second estimated recognition result is used for indicating the estimated regional positions of a plurality of adjacent traffic regions in the first type of outdoor driving scene and the second type of outdoor driving scene, and the second real recognition result is used for indicating the predetermined regional positions of a plurality of adjacent traffic regions in the first type of outdoor driving scene and the second type of outdoor driving scene.
Alternatively, in the present embodiment, the second convergence condition may be the number of iterations.
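A joint training loop with both convergence conditions might be sketched as below. PyTorch is used only as an example framework; the loss functions, thresholds, and the assumption that the model returns an edge prediction and a region prediction are illustrative choices, not the patented training procedure.

import torch
from torch import nn

def train(model: nn.Module, loader, max_epochs: int = 100,
          edge_eps: float = 1e-3, region_eps: float = 1e-3) -> nn.Module:
    """Train until the edge error and the region-position error both fall below preset thresholds."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    edge_loss_fn = nn.BCEWithLogitsLoss()    # error between estimated and real abnormal-edge information
    region_loss_fn = nn.CrossEntropyLoss()   # error between estimated and real region positions
    for _ in range(max_epochs):
        edge_err = region_err = 0.0
        for image, edge_gt, region_gt in loader:          # a group of sample image information
            optimizer.zero_grad()
            edge_pred, region_pred = model(image)
            e_loss = edge_loss_fn(edge_pred, edge_gt)
            r_loss = region_loss_fn(region_pred, region_gt)
            (e_loss + r_loss).backward()
            optimizer.step()
            edge_err += e_loss.item()
            region_err += r_loss.item()
        # first and second convergence conditions (threshold values are assumptions)
        if edge_err / len(loader) < edge_eps and region_err / len(loader) < region_eps:
            break
    return model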
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The present embodiment also provides a device for identifying an area, which is used to implement the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a structure of an area identifying apparatus according to an embodiment of the present invention, as shown in fig. 4, the apparatus includes:
a first obtaining module 42, configured to obtain image information to be identified, where the image information to be identified includes an outdoor driving scene of the target mobile device;
a first determining module 44, configured to obtain a region identification result through the image information to be identified based on a target neural network model;
the first setting module 46 is configured to set the target mobile device to not allow switching between the first area and the second area, where the area recognition result indicates that the outdoor driving scene includes a first area of the first area type and a second area of the second area type, and an abnormal edge exists at a boundary between the first area and the second area, and the first area and the second area of the second area type are both traffic areas allowing the target mobile device to drive.
Optionally, the apparatus further includes:
and a second setting module configured to set the target movable device to allow switching traveling between the first area and the second area in a case where the area recognition result indicates that the first area of the first area type and the second area of the second area type are included in the outdoor traveling scene and there is no abnormal edge at a boundary of the first area and the second area after the area recognition result is obtained by the image information to be recognized based on the target neural network model.
Optionally, the first setting module includes:
and a first setting unit configured to set the target movable apparatus not to allow switching from the first area to the second area for traveling in a case where the target movable apparatus is currently traveling on the first area.
Optionally, the apparatus further includes: a third setting module, configured to, when setting the target movable device to allow switching travel between the first area and the second area, set the target mobile device to allow switching from the first area to the second area in a case where the target mobile device is currently traveling on the first area;
a second acquisition module configured to acquire a switching instruction after setting the target movable apparatus to allow switching from the first area to the second area for traveling; and controlling the target movable device to switch from the first area to the second area to run in response to the switching instruction.
Optionally, the first acquisition module includes:
the first acquisition unit is used for acquiring image information to be identified, which is obtained by shooting an outdoor driving scene by the camera equipment; the target neural network model is used for marking passing areas of different area types by using at least different colors; wherein, in the case that the region recognition result indicates a first region including a first region type and a second region including a second region type in the outdoor driving scene, and an abnormal edge exists at a boundary of the first region and the second region, the region recognition result includes a first color mark for indicating the first region, a second color mark for indicating the second region, and a third color mark for indicating the abnormal edge.
Optionally, the apparatus further includes:
the third acquisition module is used for acquiring a group of sample image information before acquiring the image information to be identified, wherein the group of sample image information comprises first sample image information and second sample image information, the image information in the first sample image information is used for representing a first type outdoor driving scene, the first type outdoor driving scene comprises a plurality of adjacent passing areas with different area types and abnormal edges at the junction, the image information in the second sample image information is used for representing a second type outdoor driving scene, and the second type outdoor driving scene comprises a plurality of adjacent passing areas with different area types and no abnormal edges at the junction;
the second determining module is used for training the initial neural network model by using a group of sample image information to obtain a target neural network model, wherein the target neural network model has the following conditions: the error between the first estimated recognition result and the first real recognition result meets a preset first convergence condition, the first estimated recognition result is used for indicating abnormal edge information in a group of estimated sample image information, the first real recognition result is used for indicating the preset abnormal edge information in the group of sample image information, and the abnormal edge information comprises quantity information and position information; the first estimated recognition result and the first real recognition result are used for indicating whether an abnormal edge exists in the sample image information and the position of the abnormal edge.
Optionally, the target neural network model further has the following conditions: and the error between the second estimated recognition result and the second real recognition result meets a preset second convergence condition, wherein the second estimated recognition result is used for indicating the estimated regional positions of a plurality of adjacent traffic regions in the first type of outdoor driving scene and the second type of outdoor driving scene, and the second real recognition result is used for indicating the predetermined regional positions of a plurality of adjacent traffic regions in the first type of outdoor driving scene and the second type of outdoor driving scene.
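The module structure described above can be summarised in a small class sketch; the class and method names below are hypothetical and only mirror the modules listed in this embodiment.

# Illustrative mapping of the described modules onto one class (hypothetical names).
class AreaIdentificationDevice:
    def __init__(self, camera, model):
        self.camera = camera              # image capturing apparatus
        self.model = model                # target neural network model

    def acquire_image(self):
        """First acquisition module: image information of the outdoor driving scene."""
        return self.camera.capture()      # assumed camera interface

    def identify_regions(self, image):
        """First determining module: area identification result from the model."""
        return self.model(image)

    def set_switching(self, result) -> bool:
        """First and second setting modules: allow switching only if no abnormal edge exists."""
        return not result.abnormal_edge_at_boundary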
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of target movable equipment;
s2, obtaining a region identification result through the image information to be identified based on a target neural network model;
s3, setting the target movable equipment to be not allowed to switch between the first area and the second area to run when the area identification result indicates that the outdoor running scene comprises the first area of the first area type and the second area of the second area type and an abnormal edge exists at the junction of the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas allowing the target movable equipment to run.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of target movable equipment;
s2, obtaining a region identification result through the image information to be identified based on a target neural network model;
s3, setting the target movable equipment to be not allowed to switch between the first area and the second area to run when the area identification result indicates that the outdoor running scene comprises the first area of the first area type and the second area of the second area type and an abnormal edge exists at the junction of the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas allowing the target movable equipment to run.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for identifying an area, comprising:
acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of target movable equipment;
obtaining a region identification result through the image information to be identified based on a target neural network model;
and setting the target movable device to disallow switching travel between the first area and the second area in a case where the area identification result indicates that the outdoor driving scene includes a first area of a first area type and a second area of a second area type and an abnormal edge exists at the boundary between the first area and the second area, and setting the target movable device to allow switching travel between the first area and the second area in a case where the area identification result indicates that the outdoor driving scene includes the first area of the first area type and the second area of the second area type and no abnormal edge exists at the boundary between the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas in which the target movable device is allowed to travel.
2. The method of claim 1, wherein the setting the target mobile device to disallow switching travel between the first region and the second region comprises:
the target mobile device is set to disallow switching from the first area to the second area for traveling in a case where the target mobile device is currently traveling on the first area.
3. The method of claim 1, wherein
the setting the target movable device to allow switching travel between the first area and the second area includes: setting the target mobile device to allow switching from the first area to travel on the second area, in a case where the target mobile device is currently traveling on the first area;
after setting the target mobile device to allow travel over the second area with switching from the first area, the method further comprises: acquiring a switching instruction; and responding to the switching instruction to control the target movable equipment to switch from the first area to the second area for running.
4. The method of claim 1, wherein the acquiring image information to be identified comprises:
acquiring the image information to be identified, which is obtained by shooting the outdoor driving scene by the camera equipment; the target neural network model is used for marking passing areas of different area types by using at least different colors;
in the case that the region identification result indicates that the first region of the first region type and the second region of the second region type are included in the outdoor driving scene, and an abnormal edge exists at a junction of the first region and the second region, the region identification result includes a first color mark for indicating the first region, a second color mark for indicating the second region, and a third color mark for indicating the abnormal edge.
5. The method according to any one of claims 1 to 4, wherein before the acquiring the image information to be identified, the method further comprises:
acquiring a group of sample image information, wherein the group of sample image information comprises first sample image information and second sample image information, the image information in the first sample image information is used for representing a first type of outdoor driving scene, the first type of outdoor driving scene comprises a plurality of adjacent traffic areas with different area types and abnormal edges at the junction, the image information in the second sample image information is used for representing a second type of outdoor driving scene, and the second type of outdoor driving scene comprises a plurality of adjacent traffic areas with different area types and no abnormal edges at the junction;
training an initial neural network model by using the sample image information to obtain the target neural network model, wherein the target neural network model has the following conditions:
the error between a first estimated recognition result and a first real recognition result meets a preset first convergence condition, the first estimated recognition result is used for indicating the estimated abnormal edge information in the group of sample image information, the first real recognition result is used for indicating the predetermined abnormal edge information in the group of sample image information, and the abnormal edge information comprises quantity information and position information;
the first estimated recognition result and the first real recognition result are used for indicating whether the abnormal edge exists in the sample image information or not and the position of the abnormal edge.
6. The method of claim 5, wherein,
the target neural network model also has the following conditions:
and the error between a second estimated recognition result and a second real recognition result meets a preset second convergence condition, wherein the second estimated recognition result is used for indicating the estimated regional positions of a plurality of adjacent passing regions in the first type of outdoor driving scene and the second type of outdoor driving scene, and the second real recognition result is used for indicating the predetermined regional positions of a plurality of adjacent passing regions in the first type of outdoor driving scene and the second type of outdoor driving scene.
7. An apparatus for identifying an area, comprising:
the first acquisition module is used for acquiring image information to be identified, wherein the image information to be identified comprises an outdoor driving scene of the target movable equipment;
the first determining module is used for obtaining a region identification result through the image information to be identified based on a target neural network model;
a first setting module, configured to set the target movable device to disallow switching travel between the first area and the second area in a case where the area identification result indicates that the outdoor driving scene includes a first area of a first area type and a second area of a second area type and an abnormal edge exists at the boundary between the first area and the second area, and to set the target movable device to allow switching travel between the first area and the second area in a case where the area identification result indicates that the outdoor driving scene includes the first area of the first area type and the second area of the second area type and no abnormal edge exists at the boundary between the first area and the second area, wherein the first area of the first area type and the second area of the second area type are both traffic areas in which the target movable device is allowed to travel.
8. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when run.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of the claims 1 to 6.
CN201910755305.6A 2019-08-15 2019-08-15 Region identification method and device, storage medium and electronic device Active CN112395917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910755305.6A CN112395917B (en) 2019-08-15 2019-08-15 Region identification method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910755305.6A CN112395917B (en) 2019-08-15 2019-08-15 Region identification method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN112395917A CN112395917A (en) 2021-02-23
CN112395917B (en) 2024-04-12

Family

ID=74601735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910755305.6A Active CN112395917B (en) 2019-08-15 2019-08-15 Region identification method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112395917B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014040559A1 (en) * 2012-09-14 2014-03-20 华为技术有限公司 Scene recognition method and device
CN106650705A (en) * 2017-01-17 2017-05-10 深圳地平线机器人科技有限公司 Region labeling method and device, as well as electronic equipment
CN109117691A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109543647A (en) * 2018-11-30 2019-03-29 国信优易数据有限公司 A kind of road abnormality recognition method, device, equipment and medium
CN109766797A (en) * 2018-12-27 2019-05-17 秒针信息技术有限公司 The detection method and device of the access entitlements of scene
CN110103820A (en) * 2019-04-24 2019-08-09 深圳市轱辘汽车维修技术有限公司 The method, apparatus and terminal device of the abnormal behaviour of personnel in a kind of detection vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A traversability detection method for shadow regions; Gao Hua; Zhao Chunxia; Zhang Haofeng; Journal of Computer Research and Development (11); full text *

Also Published As

Publication number Publication date
CN112395917A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
US10048691B2 (en) Distinguishing lane markings for a vehicle to follow
US20190164007A1 (en) Human driving behavior modeling system using machine learning
DE102016224604A1 (en) System and method for verifying map data for a vehicle
CN107063713A (en) Method of testing and device applied to pilotless automobile
US20160202077A1 (en) Warning sign placing apparatus and control method thereof
CN109477725A (en) For generating the method and system of the cartographic information in emergency region
CN107454945B (en) Unmanned aerial vehicle's navigation
CN110335484B (en) Method and device for controlling vehicle to run
CN109584579B (en) Traffic signal lamp control method based on face recognition and computer equipment
KR20200036544A (en) A system for improving work environment of construction site and providing method thereof
DE112021005624T5 (en) Substitute data for autonomous vehicles
CN111026136A (en) Port unmanned sweeper intelligent scheduling method and device based on monitoring equipment
US20230415762A1 (en) Peer-to-peer occupancy estimation
CN113110462A (en) Obstacle information processing method and device and operating equipment
CN112395917B (en) Region identification method and device, storage medium and electronic device
CN112863195B (en) Vehicle state determination method and device
CN114333386A (en) Navigation information pushing method and device and storage medium
CN112509353B (en) Robot passing method and device, robot and storage medium
CN112396051B (en) Determination method and device for passable area, storage medium and electronic device
CN115376356B (en) Parking space management method, system, electronic equipment and nonvolatile storage medium
CN112396630A (en) Method and device for determining state of target object, storage medium and electronic device
CN116430404A (en) Method and device for determining relative position, storage medium and electronic device
CN113763704A (en) Vehicle control method, device, computer readable storage medium and processor
KR20230078464A (en) Drone for detecting traffic violation cars and method for the same
CN113091761A (en) Vehicle route planning method, device and system and non-volatile storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant