CN115376092A - Image recognition method and device in auxiliary driving - Google Patents

Image recognition method and device in auxiliary driving

Info

Publication number
CN115376092A
Authority
CN
China
Prior art keywords
image
parameters
scene
target
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211291258.2A
Other languages
Chinese (zh)
Other versions
CN115376092B (en)
Inventor
董文强
王亮
王强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Wise Security Technology Co Ltd
Original Assignee
Guangzhou Wise Security Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Wise Security Technology Co Ltd filed Critical Guangzhou Wise Security Technology Co Ltd
Priority to CN202211291258.2A priority Critical patent/CN115376092B/en
Publication of CN115376092A publication Critical patent/CN115376092A/en
Application granted granted Critical
Publication of CN115376092B publication Critical patent/CN115376092B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00 Input parameters relating to infrastructure
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/20 Static objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/406 Traffic density

Abstract

The embodiment of the invention discloses an image recognition method and device in driving assistance. The method comprises: during driving assistance, when an image collected by a camera is recognized, performing image scene detection and target object recognition on the image in real time; acquiring corresponding environmental parameters based on the currently detected image scene, and obtaining image target parameters according to the result of the target object recognition; and determining equipment adjusting parameters based on the environmental parameters and the image target parameters, adjusting the corresponding equipment parameters based on the equipment adjusting parameters, and recognizing the images collected after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain a driving assistance control instruction for driving assistance control. The scheme improves the accuracy of image recognition and improves recognition efficiency by adapting the recognition mode to different scenes.

Description

Image recognition method and device in auxiliary driving
Technical Field
The embodiment of the application relates to the technical field of driving assistance, and in particular to an image recognition method and device in driving assistance.
Background
With the popularization of vehicles and the wide application of intelligent driving, controlling a vehicle to realize driving assistance can help a driver control the vehicle more efficiently and drive safely and reliably.
In the existing driving assistance process, a fixed set of equipment parameters is used to recognize objects in front of the vehicle, so the recognition accuracy is low, and target recognition under different equipment parameters cannot be performed according to the current actual scene.
Disclosure of Invention
The embodiment of the invention provides an image recognition method and device in driving assistance, which solve the problems of low target object recognition efficiency and possibly inaccurate recognition during driving assistance in the related art, improve the accuracy of image recognition while ensuring algorithm efficiency, and improve recognition efficiency by adapting the recognition mode to different scenes.
In a first aspect, an embodiment of the present invention provides an image recognition method in driving assistance, including:
in the auxiliary driving process, when an image collected by a camera is identified, image scene detection of the image and target object identification of the image are carried out in real time;
acquiring corresponding environmental parameters based on the currently detected image scene and image target parameters obtained according to the result of target object identification;
and determining equipment adjusting parameters based on the environmental parameters and the image target parameters, adjusting corresponding equipment parameters based on the equipment adjusting parameters, and identifying the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
Optionally, the performing, in real time, image scene detection of the image includes:
the image scene detection of the image is carried out through a scene detection model obtained through pre-training, wherein the scene detection model is obtained by training based on labeled sample images of different scenes, and the scenes comprise one or more of an expressway scene, an urban road scene, a night scene, a tunnel scene, a daytime scene, a congested scene and an idle scene.
Optionally, the obtaining of the corresponding environmental parameter based on the currently detected image scene includes:
respectively determining the weight of each scene and a corresponding preset environment parameter value based on the currently detected image scene;
and calculating to obtain an environmental parameter value corresponding to the image acquired by the camera based on the weight and the preset environmental parameter value.
Optionally, the image target parameters obtained according to the result of target object recognition include:
and determining the target type, the target number and the image proportion of the identified target object as image target parameters.
Optionally, the determining a device adjustment parameter based on the environment parameter and the image target parameter includes:
determining equipment precision setting parameters of different preset gears based on the environment parameters;
and determining corresponding equipment processing algorithm parameters based on the image target parameters.
Optionally, determining the device accuracy setting parameters of different preset gears based on the environmental parameters includes:
correspondingly setting the precision parameters of the lighting equipment with different preset gears under the condition that the environment parameters are brightness parameters;
and correspondingly setting the camera resolution parameters of different preset gears under the condition that the environment parameters are image detail parameters.
Optionally, the determining a corresponding device processing algorithm parameter based on the image target parameter includes:
determining the complexity of the target object according to the target type and the target number of the target object;
determining algorithm complexity by taking the image proportion as weight according to the target object complexity;
and determining corresponding equipment processing algorithm parameters according to the algorithm complexity.
In a second aspect, an embodiment of the present invention further provides an image recognition apparatus in driving assistance, including:
the image processing module is configured to detect an image scene of an image and identify a target object of the image in real time when the image acquired by the camera is identified in the driving assistance process;
the parameter determination module is configured to acquire corresponding environmental parameters based on a currently detected image scene and image target parameters obtained according to a target object identification result;
and the equipment adjusting module is configured to determine equipment adjusting parameters based on the environmental parameters and the image target parameters, adjust corresponding equipment parameters based on the equipment adjusting parameters, and recognize the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
In a third aspect, an embodiment of the present invention also provides an image recognition apparatus in driving assistance, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the image recognition method in driving assistance according to the embodiment of the present invention.
In a fourth aspect, the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the image recognition method in driving assistance according to the present invention.
In a fifth aspect, the present application further provides a computer program product, where the computer program product includes a computer program, the computer program is stored in a computer-readable storage medium, and at least one processor of the device reads from the computer-readable storage medium and executes the computer program, so that the device executes the image recognition method in assisted driving according to the present application.
In the embodiment of the invention, in the process of driving assistance, when the image collected by the camera is identified, the image scene detection of the image and the target object identification of the image are carried out in real time; corresponding environmental parameters are acquired based on the currently detected image scene, and image target parameters are obtained according to the target object identification result; equipment adjusting parameters are determined based on the environmental parameters and the image target parameters, corresponding equipment parameters are adjusted based on the equipment adjusting parameters, and the images acquired after adjustment are identified through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control. According to the scheme, the problems in the related art that target object identification during auxiliary driving is inefficient and may be inaccurate are solved, the accuracy of image identification is improved while the algorithm efficiency is ensured, and the corresponding identification modes are adapted in different scenes, so that the identification efficiency is improved.
Drawings
Fig. 1 is a flowchart of an image recognition method in driving assistance according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for determining environmental parameters according to an embodiment of the present invention;
FIG. 3 is a flowchart of another method for image recognition in driving assistance according to an embodiment of the present invention;
fig. 4 is a block diagram of an image recognition apparatus for assisting driving according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image recognition device for assisting driving according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad invention. It should be further noted that, for convenience of description, only some structures, not all structures, relating to the embodiments of the present invention are shown in the drawings.
Fig. 1 is a flowchart of an image recognition method in driving assistance according to an embodiment of the present invention, where an embodiment of the present application specifically includes the following steps:
step S101, in the process of driving assistance, when the image collected by the camera is identified, image scene detection of the image and target object identification of the image are carried out in real time.
In one embodiment, when the vehicle runs in the driving assistance mode, images around the vehicle are collected through a camera, the images are identified, and corresponding vehicle control, such as avoiding a vehicle ahead, braking, accelerating, decelerating or turning, is executed according to the identification result so as to realize the driving assistance function.
In one embodiment, when the image collected by the camera is identified, the image scene detection of the image and the target object identification of the image are carried out in real time. The scene detection is used for determining the scene of the currently acquired image, and the target object identification is used for determining the target object contained in the image.
In one embodiment, the process of performing image scene detection may be: the image scene detection of the image is carried out through a scene detection model obtained through pre-training, wherein the scene detection model is obtained by training based on labeled sample images of different scenes, and the scenes comprise one or more of an expressway scene, an urban road scene, a night scene, a tunnel scene, a daytime scene, a congested scene and an idle scene. In this way, the corresponding scene is determined based on the current image.
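As an illustrative sketch of this multi-scene detection step (not the patent's prescribed implementation), the snippet below runs a hypothetical pre-trained multi-label classifier over a camera frame and keeps every scene whose score clears a threshold. The ONNX Runtime dependency, the model file name scene_detector.onnx, the 224x224 input size and the 0.5 threshold are all assumptions.

```python
import cv2
import numpy as np
import onnxruntime as ort  # assumed inference runtime; any engine with a similar API works

SCENE_LABELS = ["expressway", "urban_road", "night", "tunnel", "daytime", "congested", "idle"]

class SceneDetector:
    def __init__(self, model_path="scene_detector.onnx"):
        # Hypothetical pre-trained multi-label scene classifier.
        self.session = ort.InferenceSession(model_path)
        self.input_name = self.session.get_inputs()[0].name

    def detect(self, frame_bgr, threshold=0.5):
        # Resize and normalize the camera frame to the (assumed) model input size.
        img = cv2.resize(frame_bgr, (224, 224)).astype(np.float32) / 255.0
        img = np.transpose(img, (2, 0, 1))[None, ...]  # HWC -> NCHW with batch dimension
        scores = self.session.run(None, {self.input_name: img})[0][0]
        # Multi-label output: an image may belong to several scenes at once,
        # e.g. night + idle, or daytime + urban road + congested.
        return {label: float(s) for label, s in zip(SCENE_LABELS, scores) if s > threshold}
```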
In one embodiment, in the target object identification process, the image may be input into a target recognition model, which outputs the detection result of the target object, such as the target type, the target number and the image proportion of the target object. That is, the target type, the target number and the image proportion of the identified target object are determined as the image target parameters. For example, if the identified target object is a highway green belt, the target type is plant, the target number is the number of green belts, and the image proportion is the area proportion of the green belts in the current image.
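A minimal sketch of how the image target parameters could be assembled from a detector's raw output is shown below; the Detection structure, its field names and the sample bounding boxes are assumptions introduced for illustration, since the patent does not specify a detector interface.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "pedestrian", "vehicle", "green_belt"
    x: int
    y: int
    w: int
    h: int       # bounding box in pixels

def image_target_parameters(detections, frame_w, frame_h):
    """Group detections by target type; return count and total image proportion per type."""
    frame_area = float(frame_w * frame_h)
    grouped = defaultdict(lambda: {"count": 0, "proportion": 0.0})
    for d in detections:
        grouped[d.label]["count"] += 1
        grouped[d.label]["proportion"] += (d.w * d.h) / frame_area
    return dict(grouped)

# Example: a highway green belt detected as two regions covering roughly 11% of the frame.
params = image_target_parameters(
    [Detection("green_belt", 0, 400, 640, 120), Detection("green_belt", 0, 560, 640, 40)],
    frame_w=1280, frame_h=720)
# params == {"green_belt": {"count": 2, "proportion": ~0.11}}
```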
Step S102, acquiring corresponding environment parameters based on the currently detected image scene, and acquiring image target parameters according to the result of target object identification.
In one embodiment, after the image scene and the target object recognition result are obtained, the corresponding environment parameter is correspondingly obtained based on the currently detected image scene, and the image target parameter is obtained according to the target object recognition result. Specifically, fig. 2 is a flowchart of a method for determining an environmental parameter according to an embodiment of the present invention, as shown in fig. 2, which specifically includes:
step S1011, determining the weight of each scene and the corresponding preset environmental parameter value based on the currently detected image scene.
In one embodiment, multiple scenes may be detected for the same image simultaneously; that is, the image may be characterized by several of the set scenes, such as a combination of a night scene and an idle scene, or a combination of a daytime scene, an urban road scene and a congested scene. Based on the determined image scenes, the weight of each scene and the corresponding preset environment parameter value are determined respectively. For descriptive scenes such as the daytime scene, the night scene and the urban road scene, the weight is a fixed value; for scenes whose weight varies with the image content, such as the idle scene and the congested scene, different weights are assigned according to the respective idle degree or congestion degree. For example, the fixed weight may be set to 5, while the idle degree of an idle scene and the congestion degree of a congested scene correspond, from low to high, to weights from 1 to 10; the specific idle degree and congestion degree may be determined by image recognition. A fixed preset environment parameter value is set for each scene; illustratively, the preset environment parameter value may take the form of a score, for example 10 points for an idle scene, 50 points for a congested scene, 30 points for a daytime scene and 50 points for a night scene.
And step S1012, calculating an environmental parameter value corresponding to the image acquired by the camera based on the weight and the preset environmental parameter value.
In one embodiment, after the weight of each scene and the corresponding preset environment parameter value are respectively determined, the environment parameter value corresponding to the image collected by the camera is calculated based on the weights and the preset environment parameter values. Specifically, each preset environment parameter value may be multiplied by the weight of its scene, and the products accumulated to obtain a final score, which is used as the environment parameter value corresponding to the image collected by the camera.
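The weighted-sum computation just described can be sketched as follows; the scene names, the preset scores (10/50/30/50 points), the fixed weight of 5 and the 1-to-10 degree-dependent weights mirror the illustrative numbers above and are examples rather than values mandated by the patent.

```python
PRESET_SCORE = {"idle": 10, "congested": 50, "daytime": 30, "night": 50}
FIXED_WEIGHT = 5
DEGREE_SCENES = {"idle", "congested"}  # weight depends on the idle/congestion degree

def scene_weight(scene, degree=None):
    """Fixed weight for descriptive scenes; 1..10 by degree for idle/congested scenes."""
    if scene not in DEGREE_SCENES:
        return FIXED_WEIGHT
    # degree in [0.0, 1.0], estimated by image recognition
    return 1 + round(9 * (degree or 0.0))

def environment_parameter_value(detected_scenes):
    """detected_scenes maps scene name -> degree (None for descriptive scenes)."""
    return sum(scene_weight(scene, degree) * PRESET_SCORE.get(scene, 0)
               for scene, degree in detected_scenes.items())

# Example: a night scene combined with a fairly congested scene.
value = environment_parameter_value({"night": None, "congested": 0.7})
# 5 * 50 + (1 + round(9 * 0.7)) * 50 = 250 + 350 = 600
```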
Step S103, determining equipment adjusting parameters based on the environment parameters and the image target parameters, adjusting corresponding equipment parameters based on the equipment adjusting parameters, and identifying the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
In one embodiment, after the environmental parameters and the image target parameters are obtained, equipment adjusting parameters are determined based on them, and the corresponding equipment parameters are adjusted based on the equipment adjusting parameters, so that the images collected after adjustment are identified through the algorithm corresponding to the equipment adjusting parameters to obtain a driving assistance control instruction for driving assistance control. The equipment adjusting parameters optionally include equipment precision setting parameters and equipment processing algorithm parameters. The equipment precision setting parameters represent the acquisition precision to which the corresponding equipment needs to be adjusted; illustratively, they may be a lighting equipment precision parameter, a camera resolution parameter, and the like. The equipment processing algorithm parameters represent the processing algorithm applied to the collected information: after the information is collected under the equipment precision parameters, the collected information meeting the recognition requirement is recognized with the processing algorithm determined by the equipment processing algorithm parameters. Using a targeted processing algorithm keeps the computational complexity manageable and makes the recognition result more efficient and reliable.
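A small sketch of how the two kinds of equipment adjusting parameters might be bundled and applied is given below, assuming hypothetical camera and lighting interfaces (set_resolution, set_intensity); the attribute names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EquipmentAdjustment:
    resolution_gear: str          # equipment precision setting, e.g. "low"/"medium"/"high"
    lighting_gear: Optional[str]  # lighting intensity gear, or None if the lights stay off
    algorithm_id: str             # equipment processing algorithm parameter

def apply_adjustment(adj, camera, lights, algorithms):
    """Push the precision settings to the hardware and return the matching recognizer."""
    camera.set_resolution(adj.resolution_gear)    # hypothetical camera interface
    if adj.lighting_gear is not None:
        lights.set_intensity(adj.lighting_gear)   # hypothetical lighting interface
    # The returned algorithm recognizes the frames collected after adjustment
    # and produces the driving assistance control instruction.
    return algorithms[adj.algorithm_id]
```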
According to the scheme, in the process of driving assistance, when the image collected by the camera is identified, the image scene detection of the image and the target object identification of the image are carried out in real time; corresponding environmental parameters are acquired based on the currently detected image scene, and image target parameters are obtained according to the result of target object identification; equipment adjusting parameters are determined based on the environmental parameters and the image target parameters, corresponding equipment parameters are adjusted based on the equipment adjusting parameters, and the images acquired after adjustment are identified through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control. In this way, the problems in the related art that target object identification during auxiliary driving is inefficient and may be inaccurate are solved, the accuracy of image identification is improved while the algorithm efficiency is ensured, and the corresponding identification modes are adapted in different scenes, so that the identification efficiency is improved.
Fig. 3 is a flowchart of another image recognition method for assisting driving according to an embodiment of the present invention, and shows a specific process for determining a device adjustment parameter, as shown in fig. 3, specifically including:
step S201, in the process of driving assistance, when an image collected by a camera is identified, image scene detection of the image and target object identification of the image are carried out through a scene detection model obtained through pre-training.
Step S202, respectively determining the weight of each scene and a corresponding preset environment parameter value based on the currently detected image scene, calculating the environment parameter value corresponding to the image acquired by the camera based on the weight and the preset environment parameter value, and determining the target type, the target number and the image ratio of the identified target object as the image target parameters.
And S203, determining equipment precision setting parameters of different preset gears based on the environment parameters, and determining corresponding equipment processing algorithm parameters based on the image target parameters.
In one embodiment, taking the image resolution and the light intensity as examples and taking the calculated environment parameter value as the environment parameter: when the environment parameter value is greater than a, the image is collected with high resolution and high-intensity lighting; when the environment parameter value is not greater than b, the image is collected with low resolution; and when the environment parameter value is between a and b, the image is collected with medium resolution and medium-intensity lighting, where a is greater than b. It should be noted that the image resolution and the light intensity are only examples, and other equipment precision setting parameters may also be used. In the above gear setting, the image resolution has three gears and the lighting has two gears (when the light is turned on); the gears may be divided more finely, and only a few preset gears are used here for convenience of explanation.
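The gear selection just described can be sketched as follows; a and b are the placeholders used in the text (with a greater than b), and the concrete thresholds 400 and 150 are purely illustrative values chosen to be on the same order as the weighted score computed in the earlier sketch.

```python
A_THRESHOLD = 400   # "a" in the text
B_THRESHOLD = 150   # "b" in the text, with a > b

def precision_setting(env_value):
    """Map the environment parameter value to resolution and lighting gears."""
    if env_value > A_THRESHOLD:
        return {"resolution": "high", "lighting": "high"}
    if env_value <= B_THRESHOLD:
        return {"resolution": "low", "lighting": None}   # the text specifies only low resolution here
    return {"resolution": "medium", "lighting": "medium"}
```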
In another embodiment, the environment parameter can be subdivided according to different devices, so as to further improve the adjustment precision. For example, the environment parameter may be a brightness parameter value obtained by calculation and an image detail parameter value, in the process of determining the device precision setting parameter, the lighting device precision parameters in different preset gears are correspondingly set under the condition that the environment parameter is the brightness parameter, and the shooting resolution parameters in different preset gears are correspondingly set under the condition that the environment parameter is the image detail parameter.
In one embodiment, the equipment processing algorithm parameters may be determined as follows: the target object complexity is determined according to the target type and the target number of the target object, the algorithm complexity is determined from the target object complexity with the image proportion as the weight, and the corresponding equipment processing algorithm parameters are determined according to the algorithm complexity. Different target types and target numbers correspond to different target object complexities. For example, if the target object is a green belt, whose type is plant and whose number is one or two, the corresponding complexity is relatively low; if the target objects are pedestrians and their number is greater than 5, the corresponding complexity is high. When determining the target object complexity, a larger number means a higher complexity, and different types may be distinguished by assigning each type a higher or lower complexity; the final target object complexity may be obtained as a weighted average of the two. In general, one image contains multiple target objects; the final algorithm complexity is obtained by a weighted average that uses the complexity determined for each target object and the proportion of that target object in the image as the weight, and the corresponding equipment processing algorithm parameters are determined based on the algorithm complexity, where a mapping table is preset for each algorithm complexity and records the equipment processing algorithm parameters to be used. Optionally, the equipment processing algorithm parameters may correspond to a unique processing algorithm, or to multiple processing algorithms for optional use.
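The complexity-driven choice of equipment processing algorithm parameters might look like the sketch below; the per-type complexities, the 50/50 weighting between type and count, and the three complexity bands in the mapping are illustrative assumptions, while the overall procedure (type and number give the target object complexity, the image proportion acts as the weight, and a preset mapping table yields the algorithm parameters) follows the text.

```python
TYPE_COMPLEXITY = {"plant": 1.0, "vehicle": 2.0, "pedestrian": 3.0}

def target_object_complexity(target_type, target_count):
    """Higher counts and 'harder' types give higher complexity (weighted average of both)."""
    count_term = min(target_count, 10)   # saturate so a very large crowd does not dominate
    return 0.5 * TYPE_COMPLEXITY.get(target_type, 2.0) + 0.5 * count_term

def algorithm_complexity(image_targets):
    """image_targets: type -> {"count": n, "proportion": p}, as in the earlier sketch.
    Each target type is weighted by its proportion of the image."""
    total = sum(target_object_complexity(t, v["count"]) * v["proportion"]
                for t, v in image_targets.items())
    weight = sum(v["proportion"] for v in image_targets.values())
    return total / weight if weight else 0.0

def processing_algorithm_parameters(complexity):
    """Preset mapping table from algorithm complexity to equipment processing algorithm parameters."""
    if complexity < 1.5:
        return {"algorithm_id": "lightweight_detector"}
    if complexity < 3.0:
        return {"algorithm_id": "standard_detector"}
    return {"algorithm_id": "high_accuracy_detector"}
```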
And S204, adjusting corresponding equipment parameters based on the equipment precision setting parameters, and identifying the acquired images after adjustment through an algorithm corresponding to the equipment adjustment parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
In the process of driving assistance, when the image collected by the camera is identified, the image scene detection of the image and the target object identification of the image are carried out in real time; corresponding environmental parameters are acquired based on the currently detected image scene, and image target parameters are obtained according to the result of target object identification; equipment adjusting parameters are determined based on the environmental parameters and the image target parameters, corresponding equipment parameters are adjusted based on the equipment adjusting parameters, and the images acquired after adjustment are identified through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control. According to the scheme, the problems in the related art that target object identification during auxiliary driving is inefficient and may be inaccurate are solved, the accuracy of image identification is improved while the algorithm efficiency is ensured, and the identification efficiency is improved by adapting the corresponding identification modes in different scenes.
Fig. 4 is a structural block diagram of an image recognition device in driving assistance according to an embodiment of the present invention. The device is configured to execute the image recognition method in driving assistance according to the above embodiments, and has the corresponding functional modules and beneficial effects. As shown in fig. 4, the image recognition device specifically includes: an image processing module 101, a parameter determination module 102, and an equipment adjusting module 103, wherein,
the image processing module 101 is configured to detect an image scene of an image and identify a target object of the image in real time when the image acquired by a camera is identified in the driving assistance process;
a parameter determining module 102 configured to obtain a corresponding environmental parameter based on a currently detected image scene and an image target parameter according to a result of target object recognition;
and the equipment adjusting module 103 is configured to determine equipment adjusting parameters based on the environmental parameters and the image target parameters, adjust corresponding equipment parameters based on the equipment adjusting parameters, and recognize the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
According to the scheme, in the process of driving assistance, when the image collected by the camera is identified, the image scene detection of the image and the target object identification of the image are carried out in real time; corresponding environmental parameters are acquired based on the currently detected image scene, and image target parameters are obtained according to the target object identification result; equipment adjusting parameters are determined based on the environmental parameters and the image target parameters, corresponding equipment parameters are adjusted based on the equipment adjusting parameters, and the images acquired after adjustment are identified through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control. In this way, the problems in the related art that target object identification during auxiliary driving is inefficient and may be inaccurate are solved, the accuracy of image identification is improved while the algorithm efficiency is ensured, and the corresponding identification modes are adapted in different scenes, so that the identification efficiency is improved. Correspondingly, the functions executed by the modules are respectively as follows:
in one possible embodiment, the performing, in real time, image scene detection of the image includes:
the image scene detection of the image is carried out through a scene detection model obtained through pre-training, wherein the scene detection model is obtained by training based on labeled sample images of different scenes, and the scenes comprise one or more of an expressway scene, an urban road scene, a night scene, a tunnel scene, a daytime scene, a congested scene and an idle scene.
In one possible embodiment, the acquiring the corresponding environmental parameter based on the currently detected image scene includes:
respectively determining the weight of each scene and a corresponding preset environment parameter value based on the currently detected image scene;
and calculating to obtain an environmental parameter value corresponding to the image acquired by the camera based on the weight and the preset environmental parameter value.
In a possible embodiment, the image target parameters obtained according to the result of target object recognition include:
and determining the target type, the target number and the image proportion of the identified target object as image target parameters.
In one possible embodiment, the determining device adjustment parameters based on the environmental parameters and the image target parameters comprises:
determining equipment precision setting parameters of different preset gears based on the environment parameters;
and determining corresponding equipment processing algorithm parameters based on the image target parameters.
In a possible embodiment, the determining the device accuracy setting parameters of different preset gears based on the environment parameters includes:
correspondingly setting the precision parameters of the lighting equipment with different preset gears under the condition that the environment parameters are brightness parameters;
and correspondingly setting the camera resolution parameters of different preset gears under the condition that the environment parameters are image detail parameters.
In one possible embodiment, the determining the corresponding device processing algorithm parameter based on the image target parameter includes:
determining the complexity of the target object according to the target type and the target number of the target object;
determining algorithm complexity by taking the image proportion as weight according to the target object complexity;
and determining corresponding equipment processing algorithm parameters according to the algorithm complexity.
Fig. 5 is a schematic structural diagram of an image recognition apparatus for assisting driving according to an embodiment of the present invention, as shown in fig. 5, the apparatus includes a processor 201, a memory 202, an input device 203, and an output device 204; the number of the processors 201 in the device may be one or more, and one processor 201 is taken as an example in fig. 5; the processor 201, the memory 202, the input device 203 and the output device 204 in the apparatus may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example. The memory 202, which is a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image recognition method in driving assistance in the embodiment of the present invention. The processor 201 executes various functional applications of the device and data processing by running software programs, instructions and modules stored in the memory 202, that is, implements the image recognition method in driving assistance described above. The input device 203 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the apparatus. The output device 204 may include a display device such as a display screen.
Embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method of image recognition in assisted driving, the method comprising:
the method comprises the steps of obtaining a plurality of basic encryption keys generated in advance, sequencing the basic encryption keys by adopting a preset sequencing rule to obtain a plurality of continuous basic encryption keys in a fixed sequence, and simultaneously storing the sequencing rule and decryption keys corresponding to the basic encryption keys in a data receiving end in advance;
in the auxiliary driving process, when an image acquired by a camera is identified, image scene detection of the image and target object identification of the image are carried out in real time;
acquiring corresponding environmental parameters based on the currently detected image scene and image target parameters obtained according to the result of target object identification;
and determining equipment adjusting parameters based on the environmental parameters and the image target parameters, adjusting corresponding equipment parameters based on the equipment adjusting parameters, and identifying the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
From the above description of the embodiments, it is obvious for those skilled in the art that the embodiments of the present invention can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but the former is a better implementation in many cases. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device) perform the methods described in the embodiments of the present invention.
It should be noted that, in the embodiment of the image recognition device for driving assistance, the included units and modules are only divided according to the functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiment of the invention.
In some possible embodiments, various aspects of the methods provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of the methods according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device, for example, the computer device may perform the image recognition method in assisted driving described in the embodiments of the present application. The program product may be implemented using any combination of one or more readable media.
It should be noted that the foregoing is only a preferred embodiment of the present invention and the technical principles applied. Those skilled in the art will appreciate that the embodiments of the present invention are not limited to the specific embodiments described herein, and that various obvious changes, adaptations, and substitutions are possible, without departing from the scope of the embodiments of the present invention. Therefore, although the embodiments of the present invention have been described in more detail through the above embodiments, the embodiments of the present invention are not limited to the above embodiments, and many other equivalent embodiments may be included without departing from the concept of the embodiments of the present invention, and the scope of the embodiments of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image recognition method in driving assistance, comprising:
in the auxiliary driving process, when an image collected by a camera is identified, image scene detection of the image and target object identification of the image are carried out in real time;
acquiring corresponding environmental parameters based on the currently detected image scene and image target parameters obtained according to the result of target object identification;
and determining equipment adjusting parameters based on the environmental parameters and the image target parameters, adjusting corresponding equipment parameters based on the equipment adjusting parameters, and identifying the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
2. The method according to claim 1, wherein the detecting an image scene of the image in real time comprises:
and detecting the image scene of the image through a scene detection model obtained by pre-training, wherein in the scene detection model training process, the scene detection model is obtained by training sample images based on different marked scenes, and the scenes comprise one or more of an expressway scene, an urban road scene, a night scene, a tunnel scene, a daytime scene, a congested scene and an idle scene.
3. The image recognition method in driving assistance according to claim 1, wherein the acquiring of the corresponding environment parameter based on the currently detected image scene includes:
respectively determining the weight of each scene and a corresponding preset environment parameter value based on the currently detected image scene;
and calculating to obtain an environmental parameter value corresponding to the image acquired by the camera based on the weight and the preset environmental parameter value.
4. The image recognition method in driving assistance according to claim 1, wherein the image target parameters obtained according to the result of target object recognition include:
and determining the target type, the target number and the image proportion of the identified target object as image target parameters.
5. The image recognition method in driving assistance according to claim 4, wherein the determining of the equipment adjusting parameters based on the environmental parameters and the image target parameters includes:
determining equipment precision setting parameters of different preset gears based on the environment parameters;
and determining corresponding equipment processing algorithm parameters based on the image target parameters.
6. The image recognition method in driving assistance according to claim 5, wherein the determining of the equipment precision setting parameters of different preset gears based on the environment parameters includes:
correspondingly setting the precision parameters of the lighting equipment with different preset gears under the condition that the environment parameters are brightness parameters;
and correspondingly setting the camera resolution parameters of different preset gears under the condition that the environment parameters are image detail parameters.
7. The image recognition method in driving assistance according to claim 5, wherein the determining of the corresponding equipment processing algorithm parameters based on the image target parameters includes:
determining the complexity of the target object according to the target type and the target number of the target object;
determining algorithm complexity by taking the image proportion as weight according to the target object complexity;
and determining corresponding equipment processing algorithm parameters according to the algorithm complexity.
8. An image recognition device for assisting driving, comprising:
the image processing module is configured to detect an image scene of an image and identify a target object of the image in real time when the image acquired by the camera is identified in the driving assistance process;
the parameter determination module is configured to acquire corresponding environmental parameters based on a currently detected image scene and image target parameters obtained according to a target object identification result;
and the equipment adjusting module is configured to determine equipment adjusting parameters based on the environmental parameters and the image target parameters, adjust corresponding equipment parameters based on the equipment adjusting parameters, and recognize the acquired images after adjustment through an algorithm corresponding to the equipment adjusting parameters to obtain an auxiliary driving control instruction so as to perform auxiliary driving control.
9. An image recognition apparatus in driving assistance, the apparatus comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image recognition method in driving assistance according to any one of claims 1 to 7.
10. A storage medium containing computer executable instructions for performing the image recognition method in assisted driving as claimed in any one of claims 1-7 when executed by a computer processor.
CN202211291258.2A 2022-10-21 2022-10-21 Image recognition method and device in auxiliary driving Active CN115376092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211291258.2A CN115376092B (en) 2022-10-21 2022-10-21 Image recognition method and device in auxiliary driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211291258.2A CN115376092B (en) 2022-10-21 2022-10-21 Image recognition method and device in auxiliary driving

Publications (2)

Publication Number Publication Date
CN115376092A true CN115376092A (en) 2022-11-22
CN115376092B CN115376092B (en) 2023-02-28

Family

ID=84073366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211291258.2A Active CN115376092B (en) 2022-10-21 2022-10-21 Image recognition method and device in auxiliary driving

Country Status (1)

Country Link
CN (1) CN115376092B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018103238A1 (en) * 2016-12-06 2018-06-14 深圳市元征科技股份有限公司 Method and device for automatically adjusting intra-vehicle environment
CN111856606A (en) * 2019-08-01 2020-10-30 上海保隆汽车科技股份有限公司 Forward-looking intelligent driving auxiliary device and method based on infrared thermal imaging
CN113525189A (en) * 2021-06-21 2021-10-22 位置互联(北京)科技有限公司 Automobile seat adjusting method, device, equipment and storage medium
CN114143429A (en) * 2021-11-30 2022-03-04 惠州Tcl移动通信有限公司 Image shooting method, image shooting device, electronic equipment and computer readable storage medium
WO2022133939A1 (en) * 2020-12-24 2022-06-30 深圳市大疆创新科技有限公司 Driving control method and device, automobile, and computer-readable storage medium


Also Published As

Publication number Publication date
CN115376092B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
EP3367303A1 (en) Autonomous driving image processing method and apparatus thereof
CN108082037A (en) Brake lamp detects
US11250279B2 (en) Generative adversarial network models for small roadway object detection
US20090185717A1 (en) Object detection system with improved object detection accuracy
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
CN112528807B (en) Method and device for predicting running track, electronic equipment and storage medium
CN110834667B (en) Vehicle steering control method and device, vehicle, terminal device and storage medium
CN112949578B (en) Vehicle lamp state identification method, device, equipment and storage medium
CN115056649A (en) Augmented reality head-up display system, implementation method, equipment and storage medium
CN112874519A (en) Control method and system for adaptive cruise, storage medium and electronic device
CN111444810A (en) Traffic light information identification method, device, equipment and storage medium
CN113734203A (en) Control method, device and system for intelligent driving and storage medium
CN111967377A (en) Method, device and equipment for identifying state of engineering vehicle and storage medium
CN114495060A (en) Road traffic marking identification method and device
CN115376092B (en) Image recognition method and device in auxiliary driving
CN112749602A (en) Target query method, device, equipment and storage medium
CN112164221B (en) Image data mining method, device and equipment and road side equipment
CN114882451A (en) Image processing method, device, equipment and medium
CN110177222B (en) Camera exposure parameter adjusting method and device combining idle resources of vehicle machine
CN112668437A (en) Vehicle braking method, device, equipment and storage medium
CN114103966A (en) Control method, device and system for driving assistance
CN109910891A (en) Control method for vehicle and device
CN115619975A (en) Well lid height difference identification method and system based on infrared binocular structured light
CN114758315A (en) Identification method of vehicle signal lamp, training method of identification model and related equipment
CN115376093A (en) Object prediction method and device in intelligent driving and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant