CN115841763B - Shooting control method and device based on demand identification in driving mode - Google Patents
- Publication number
- CN115841763B (grant publication of application CN202211720089.XA)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- smart phone
- terminal equipment
- phone support
- vehicles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The application relates to the technical field of general data processing in the Internet industry, and in particular to a shooting control method and device based on demand identification in a driving mode. The method is applied to an automatic driving domain controller on a first vehicle, the first vehicle being further provided with a smart phone support and terminal equipment, and comprises the following steps: if it is determined that a third vehicle matching the reference vehicle characteristic information exists around the first vehicle and that the terminal equipment is present on the smart phone support, a target clamping height corresponding to the smart phone support is determined, a first instruction is sent to the smart phone support so that the smart phone support is adjusted to the target clamping height, and a second instruction is sent to the terminal equipment so that the terminal equipment starts the video recording function of its camera and records the third vehicle until the third vehicle no longer exists in the view finding range of the camera of the terminal equipment. By adopting the method, the driving experience and the safety of the vehicle can be improved.
Description
Technical Field
The application relates to the technical field of general data processing in the Internet industry, in particular to a shooting control method and device based on demand identification in a driving mode.
Background
With the rapid development of science and technology, the application of automotive electronics has made automobiles more intelligent and improved the experience and well-being of users. However, in existing intelligent automobiles, when a driver wants to shoot a vehicle of interest to form a short video while driving, the driver needs to take the mobile phone off the mobile phone support and start the shooting function. The whole process takes a long time, the best shooting moment may be missed, the shooting effect is often not ideal, and using the mobile phone to shoot while driving poses a certain safety hazard. Therefore, how to safely capture high-quality footage of a vehicle of interest to the user during driving has become a current research hotspot.
Disclosure of Invention
The embodiment of the application provides a shooting control method and device based on demand identification in a driving mode, in which an automatic driving domain controller can identify and determine a target vehicle based on the demand of the user in the driving mode and control terminal equipment to shoot the target vehicle, so that the driving experience and safety of the user can be improved.
In a first aspect, an embodiment of the present application provides a shooting control method based on demand identification in a driving mode, where the method is applied to an autopilot domain controller on a first vehicle, the first vehicle is further provided with a smart phone support and terminal equipment, and the autopilot domain controller is communicatively connected with the smart phone support and the terminal equipment. The method comprises the following steps: if it is detected that the terminal equipment is in a charging state and the driving mode of the terminal equipment is in an on state, acquiring reference vehicle characteristic information of a reference vehicle provided by the user. And acquiring a current environment image of the first vehicle, collected by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera. And if it is determined according to the environment image that one or more second vehicles exist around the first vehicle, extracting vehicle characteristic information of the one or more second vehicles, and determining, according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles, whether a third vehicle matching the reference vehicle characteristic information exists among the one or more second vehicles. And if it is determined that the third vehicle exists, determining whether the terminal equipment is present on the smart phone support. If it is determined that the terminal equipment is present on the smart phone support, determining a target clamping height corresponding to the smart phone support and sending a first instruction to the smart phone support so that the smart phone support is adjusted to the target clamping height, wherein the first instruction includes the target clamping height, and when the clamping height of the smart phone support is the target clamping height, the original view finding range of the camera of the terminal equipment includes the third vehicle. And sending a second instruction to the terminal equipment so that the terminal equipment starts the video recording function of the camera and records the third vehicle, recording being stopped when the third vehicle no longer exists in the view finding range of the camera of the terminal equipment. If the autopilot domain controller determines that the terminal equipment is not present on the smart phone support, generating and outputting reminding information, wherein the reminding information is used for reminding the user to place the terminal equipment on the smart phone support; determining again whether the terminal equipment is present on the smart phone support; and if it is determined again that the terminal equipment is not present on the smart phone support, maintaining the current system state.
In the embodiment of the application, the autopilot domain controller can judge whether a third vehicle matched with the reference vehicle characteristic information exists around the first vehicle according to the reference vehicle characteristic information of the reference vehicle provided by a user and the current environment image of the first vehicle acquired by the vehicle body domain controller on the first vehicle through the vehicle-mounted camera. Further, the autopilot domain controller may control a terminal device on the first vehicle to photograph the third vehicle. By adopting the method, the automatic driving domain controller can identify and control the terminal equipment to shoot the third vehicle based on the requirement of the user in the driving mode, so that the driving experience and the safety of the user can be improved.
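To make the overall flow concrete, the following is a minimal, non-authoritative Python sketch of the control flow described in the first aspect; all object and method names (for example controller.detect_vehicles, mount.set_clamp_height, phone.start_recording) are hypothetical placeholders introduced only for this illustration and are not defined by the application.

```python
import time

# Hypothetical sketch of the first-aspect control flow; names are illustrative only.
def run_demand_based_shooting(controller, phone, mount, body_domain):
    # Only proceed if the terminal device is charging and its driving mode is on.
    if not (phone.is_charging() and phone.driving_mode_on()):
        return

    # Reference vehicle characteristic information: brand + contour provided by the user.
    reference_features = controller.get_reference_vehicle_features()

    # Current environment image collected by the vehicle body domain controller.
    env_image = body_domain.capture_environment_image()

    # Look for surrounding (second) vehicles and a third vehicle matching the reference.
    second_vehicles = controller.detect_vehicles(env_image)
    third_vehicle = controller.match_reference(second_vehicles, reference_features)
    if third_vehicle is None:
        return

    # The terminal device must be on the smart phone support before shooting.
    if not mount.phone_present():
        controller.remind_user("Please place the phone on the smart phone support")
        if not mount.phone_present():
            return  # maintain the current system state

    # Adjust the clamping height so the camera's original view covers the third
    # vehicle, then record until it leaves the view finding range.
    target_height = controller.compute_target_clamp_height(third_vehicle, mount, phone)
    mount.set_clamp_height(target_height)   # first instruction
    phone.start_recording()                 # second instruction
    while controller.vehicle_in_view(third_vehicle, phone):
        time.sleep(0.1)
    phone.stop_recording()
```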
With reference to the first aspect, in a feasible implementation manner, the determining a target clamping height corresponding to the smart phone support includes: acquiring the current clamping height of the smart phone support and the attitude (orientation) of the terminal equipment. And determining whether the terminal equipment is inverted according to the attitude of the terminal equipment. If it is determined that the terminal equipment is not inverted, determining the size of the terminal equipment and the position parameters of the camera of the terminal equipment on the terminal equipment according to the preset terminal equipment model. And determining the target clamping height corresponding to the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment, the position parameter of the camera of the terminal equipment on the terminal equipment, the pre-stored geometric model characteristic parameters between the smart phone support base and the engine cover of the first vehicle, and the first relative position of the third vehicle and the smart phone support base.
With reference to the first aspect, in a feasible implementation manner, the determining, according to the current clamping height of the smart phone support, the size of the terminal equipment and the position parameter of the camera of the terminal equipment, the pre-stored geometric model characteristic parameters between the smart phone support base and the engine cover of the first vehicle, and the first relative position between the third vehicle and the smart phone support base, the target clamping height corresponding to the smart phone support includes: determining a second relative position of the camera of the terminal equipment and the smart phone support base according to the current clamping height of the smart phone support, the size of the terminal equipment and the position parameter of the camera of the terminal equipment. And determining, according to the second relative position, the first relative position of the third vehicle and the smart phone support base, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment, a first clamping height range within which the original view finding range of the camera of the terminal equipment includes the third vehicle, and a view finding range corresponding to the first clamping height range. And determining, according to the pre-stored geometric model characteristic parameters between the smart phone support base and the engine cover of the first vehicle and the view finding range corresponding to the first clamping height range, a second clamping height range within which the actual framing effect is not blocked by the engine cover of the first vehicle. And selecting the minimum value in the second clamping height range as the target clamping height.
With reference to the first aspect, in a possible implementation manner, the determining, according to the second relative position, the first relative position of the third vehicle and the smart phone support base, the stroke of the smart phone support, and the shooting parameters of the camera of the terminal equipment, of the first clamping height range within which the original view finding range of the camera of the terminal equipment includes the third vehicle and of the view finding range corresponding to the first clamping height range includes: determining a relation set between the clamping height of the smart phone support and the view finding range of the camera of the terminal equipment according to the second relative position, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And determining, according to the relation set and the first relative position, the first clamping height range within which the original view finding range of the camera of the terminal equipment includes the third vehicle, and the view finding range corresponding to the first clamping height range.
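As an illustration of how such a target clamping height could be obtained, the following Python sketch walks through the steps above under simplified geometric assumptions (a pinhole camera with a fixed vertical field of view, all heights measured from the smart phone support base, and a brute-force scan of the stroke instead of an analytic relation set); every name, parameter and geometric simplification here is an assumption of this sketch, not the required implementation.

```python
import math

def target_clamp_height(cam_offset, stroke, vertical_fov_deg,
                        target_rel_pos, hood_geometry):
    """Return a clamping height (mm) at which the phone camera frames the
    third vehicle while the engine cover does not block the framing effect.
    All quantities and the geometry are simplified, illustrative assumptions."""
    target_dist, target_height = target_rel_pos   # first relative position: third vehicle vs. support base
    hood_dist, hood_height = hood_geometry        # geometric model: engine cover edge vs. support base
    half_fov = math.radians(vertical_fov_deg) / 2.0

    lo, hi = stroke                                # reachable clamping heights of the smart phone support
    candidates = []
    for h in range(int(lo), int(hi) + 1):          # scan the stroke in 1 mm steps
        cam_h = h + cam_offset                     # second relative position: camera height above the base
        # Condition 1: the third vehicle lies inside the original view finding range.
        top = cam_h + target_dist * math.tan(half_fov)
        bottom = cam_h - target_dist * math.tan(half_fov)
        frames_target = bottom <= target_height <= top
        # Condition 2: the engine cover stays below the lower edge of the view.
        hood_clear = (cam_h - hood_dist * math.tan(half_fov)) >= hood_height
        if frames_target and hood_clear:
            candidates.append(h)

    # As in the implementation manner above, take the minimum admissible height.
    return min(candidates) if candidates else None
```

The implementation manner itself derives the admissible clamping height ranges from a relation set between clamping height and view finding range; the scan above is used only to keep the sketch short.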
With reference to the first aspect, in a possible implementation manner, the acquiring the reference vehicle feature information of the reference vehicle provided by the user includes: the number of reference vehicles provided by the user is obtained. And if the number of the reference vehicles provided by the user is 1, acquiring the vehicle brands and the vehicle outlines of the reference vehicles, and determining the vehicle brands and the vehicle outlines of the reference vehicles as the reference vehicle characteristic information of the reference vehicles.
With reference to the first aspect, in a possible implementation manner, the method further includes: and if the number of the reference vehicles provided by the user is determined to be greater than 1, acquiring the vehicle brand of each reference vehicle in the plurality of reference vehicles. And determining a target vehicle brand according to the vehicle brand of each reference vehicle in the plurality of reference vehicles, wherein the duty ratio of the reference vehicles which belong to the target vehicle brand in the plurality of reference vehicles is equal to or larger than a preset duty ratio. And acquiring the vehicle contour of each reference vehicle in the plurality of reference vehicles, and determining the target vehicle contour corresponding to the plurality of reference vehicles according to the vehicle contour of each reference vehicle. And determining the reference vehicle characteristic information of the reference vehicle according to the brand of the target vehicle and the outline of the target vehicle.
With reference to the first aspect, in a possible implementation manner, the determining a target vehicle brand according to the vehicle brands of the plurality of reference vehicles includes: dividing the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brands of the reference vehicles, wherein the vehicle brands corresponding to the respective first reference vehicle sets in the N1 first reference vehicle sets are different from one another, the reference vehicles contained in each first reference vehicle set belong to the vehicle brand corresponding to that first reference vehicle set, and N1 is a positive integer equal to or greater than 1. And determining N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of the reference vehicles contained in each first reference vehicle set, wherein the duty ratio of the reference vehicles contained in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than the preset duty ratio, and N2 is a positive integer equal to or greater than 1. And determining the vehicle brand corresponding to each target reference vehicle set in the N2 target reference vehicle sets as a target vehicle brand.
With reference to the first aspect, in a possible implementation manner, the determining, according to a vehicle profile of the reference vehicle, a target profile corresponding to the plurality of reference vehicles includes: and if the plurality of reference vehicles are two reference vehicles, judging whether the similarity of the vehicle profiles of the two reference vehicles is larger than a first preset similarity. And if the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than the first preset similarity, determining the vehicle profile of any one of the two reference vehicles as a target vehicle profile. And if the similarity of the vehicle profiles of the two reference vehicles is smaller than the first preset similarity, determining the vehicle profiles of the two reference vehicles as target vehicle profiles.
With reference to the first aspect, in a possible implementation manner, the method further includes: if the plurality of reference vehicles are three or more reference vehicles, judging whether the similarity of the vehicle profiles of any two reference vehicles in the three or more reference vehicles is smaller than the first preset similarity. And if the similarity of the vehicle profiles of any two reference vehicles in the three or more reference vehicles is less than the first preset similarity, determining the vehicle profiles of the three or more reference vehicles as target vehicle profiles. And if the similarity of the vehicle profiles of any two reference vehicles in the three or more reference vehicles is not smaller than the first preset similarity, determining one or more second reference vehicle sets from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles, wherein each second reference vehicle set in the one or more second reference vehicle sets comprises at least two reference vehicles, and the similarity of the vehicle profiles of any two reference vehicles in the at least two reference vehicles in each second reference vehicle set is equal to or larger than the first preset similarity. And determining the vehicle contour corresponding to each second reference vehicle set according to the vehicle contour of the reference vehicle contained in each second reference vehicle set in the one or more second reference vehicle sets, and determining the vehicle contour corresponding to each second reference vehicle set as a target vehicle contour.
With reference to the first aspect, in a possible implementation manner, the determining, according to the reference vehicle feature information and the vehicle feature information of the one or more second vehicles, whether there is a third vehicle matching the reference vehicle feature information in the one or more second vehicles includes: performing the following determination operations on any one of the one or more second vehicles: and acquiring the vehicle brand and the vehicle outline of any second vehicle. And judging whether the vehicle brand and the vehicle outline of any second vehicle are matched with the reference vehicle characteristic information. And if the brand and the outline of the vehicle of any second vehicle are matched with the reference vehicle characteristic information, determining any second vehicle as a third vehicle. The determination operation is performed on the one or more second vehicles, respectively, to determine whether a third vehicle that matches the reference vehicle characteristic information exists among the one or more second vehicles.
With reference to the first aspect, in a possible implementation manner, the determining whether the vehicle brand and the vehicle contour of the second vehicle match the reference vehicle feature information includes: and if the vehicle brand of any second vehicle is identical to the target vehicle brand, and the similarity between the vehicle contour of any second vehicle and the target vehicle contour is equal to or greater than a second preset similarity, determining that the vehicle brand and the vehicle contour of any second vehicle are matched with the reference vehicle characteristic information.
With reference to the first aspect, in a possible implementation manner, the method further includes: and realizing synchronous video recording with the cameras of the terminal equipment for the third vehicle through the vehicle-mounted cameras on the first vehicle. And taking the time stamp as a mark to obtain target video content aiming at the third vehicle according to the video content of the vehicle-mounted camera to the third vehicle and the video content of the camera of the terminal equipment.
In a second aspect, an embodiment of the present invention provides a photographing control apparatus based on demand identification in a driving mode. The device comprises: an acquisition unit, used for acquiring the reference vehicle characteristic information of the reference vehicle provided by the user if the terminal equipment is detected to be in a charging state and the driving mode of the terminal equipment is in an on state. The acquisition unit is used for acquiring the current environment image of the first vehicle, which is acquired by the vehicle body domain controller on the first vehicle through the vehicle-mounted camera. And the processing unit is used for extracting vehicle characteristic information of one or more second vehicles if one or more second vehicles exist around the first vehicle according to the environment image, and determining whether a third vehicle matched with the reference vehicle characteristic information exists in the one or more second vehicles according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles. And the determining unit is used for determining whether the terminal equipment exists on the smart phone support if the third vehicle exists. The processing unit is configured to determine, if it is determined that the terminal device exists on the smart phone support, a target clamping height corresponding to the smart phone support, send a first instruction to the smart phone support so that the smart phone support is adjusted to the target clamping height, and send a second instruction to the terminal device so that the terminal device starts the video recording function of the camera and records a video of the third vehicle, recording being stopped when the third vehicle does not exist in the view finding range of the camera of the terminal device, where the first instruction includes the target clamping height, and when the clamping height of the smart phone support is the target clamping height, the original view finding range of the camera of the terminal device includes the third vehicle. The processing unit is used for generating and outputting reminding information if the terminal equipment does not exist on the smart phone support; determining again whether the terminal equipment exists on the smart phone support; and if it is determined again that the terminal equipment does not exist on the smart phone support, maintaining the current system state, wherein the reminding information is used for reminding the user to place the terminal equipment on the smart phone support.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: the acquisition unit is used for acquiring the current clamping height of the smart phone support and the attitude (orientation) of the terminal equipment. And the determining unit is used for determining whether the terminal equipment is inverted according to the attitude of the terminal equipment. And the processing unit is used for determining the size of the terminal equipment and the position of the camera of the terminal equipment on the terminal equipment according to the preset terminal equipment model if the terminal equipment is not inverted. The processing unit is used for determining the target clamping height corresponding to the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment, the position parameter of the camera of the terminal equipment on the terminal equipment, the pre-stored geometric model characteristic parameters between the smart phone support base and the engine cover of the first vehicle, and the first relative position of the third vehicle and the smart phone support base.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes: and selecting a unit. And the processing unit is used for determining a second relative position of the camera of the terminal equipment and the base of the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment and the position parameter of the camera of the terminal equipment on the terminal equipment. The processing unit is used for determining that the original view finding range of the camera of the terminal equipment comprises a first clamping height range of the third vehicle and a view finding range corresponding to the first clamping height range according to the second relative position, the first relative position of the third vehicle and the base of the smart phone support, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And the processing unit is used for determining a second clamping height range in which the actual framing effect is not blocked by the engine cover of the first vehicle according to the prestored geometric model characteristic parameters between the intelligent mobile phone support base and the engine cover of the first vehicle and the framing range corresponding to the first clamping height range. And the selecting unit is used for selecting the minimum value in the second clamping height range as the target clamping height.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the processing unit is used for determining a relation set between the clamping height of the smart phone support and the camera view finding range of the terminal equipment according to the second relative position, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And the processing unit is used for determining that the original view finding range of the camera of the terminal equipment comprises a first clamping height range of the third vehicle and a view finding range corresponding to the first clamping height range according to the relation set and the first relative position. With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the acquisition unit is used for acquiring the number of the reference vehicles provided by the user. And the processing unit is used for acquiring the vehicle brand and the vehicle outline of the reference vehicle and determining the vehicle brand and the vehicle outline of the reference vehicle as the reference vehicle characteristic information of the reference vehicle if the number of the reference vehicles provided by the user is 1.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes a determining unit. And the processing unit is used for acquiring the vehicle brand of each reference vehicle in the plurality of reference vehicles if the number of the reference vehicles provided by the user is determined to be greater than 1. And the determining unit is used for determining a target vehicle brand according to the vehicle brand of each reference vehicle in the plurality of reference vehicles, wherein the duty ratio of the reference vehicles belonging to the target vehicle brand among the plurality of reference vehicles is equal to or larger than a preset duty ratio. And the processing unit is used for acquiring the vehicle contour of each reference vehicle in the plurality of reference vehicles and determining the target vehicle contour corresponding to the plurality of reference vehicles according to the vehicle contour of each reference vehicle. And the determining unit is used for determining the reference vehicle characteristic information of the reference vehicles according to the target vehicle brand and the target vehicle contour.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: the processing unit is used for dividing the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brands of the reference vehicles, wherein the vehicle brands corresponding to the first reference vehicle sets in the N1 first reference vehicle sets are different, the reference vehicles contained in the first reference vehicle sets belong to the vehicle brands corresponding to the first reference vehicle sets, and N1 is a positive integer equal to or greater than 1. And the determining unit is used for determining N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of the reference vehicles contained in each first reference vehicle set, wherein the duty ratio of the reference vehicles contained in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than a preset duty ratio, and N2 is a positive integer equal to or greater than 1. And the determining unit is used for determining the vehicle brand corresponding to each target reference vehicle set in the N2 target reference vehicle sets as a target vehicle brand.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the processing unit is used for judging whether the similarity of the vehicle profiles of the two reference vehicles is greater than a first preset similarity or not if the plurality of reference vehicles are determined to be the two reference vehicles. And a determining unit configured to determine a vehicle profile of any one of the two reference vehicles as a target vehicle profile if it is determined that the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than the first preset similarity. And a determining unit that determines the vehicle profiles of the two reference vehicles as target vehicle profiles if the similarity of the vehicle profiles of the two reference vehicles is determined to be smaller than the first preset similarity.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the processing unit is used for judging whether the similarity of the vehicle profiles of any two reference vehicles in the three or more reference vehicles is smaller than the first preset similarity or not if the plurality of reference vehicles are three or more reference vehicles. And the determining unit is used for determining the vehicle profiles of any two reference vehicles in the three or more reference vehicles as target vehicle profiles if the similarity of the vehicle profiles of the three or more reference vehicles is less than the first preset similarity. A determining unit, configured to determine one or more second reference vehicle sets from the three or more reference vehicles according to vehicle profiles of the three or more reference vehicles if it is determined that the similarity of vehicle profiles of any two reference vehicles in the three or more reference vehicles is not less than the first preset similarity, where the similarity of vehicle profiles of any two reference vehicles in at least two reference vehicles included in the one or more second reference vehicle sets is equal to or greater than the first preset similarity. And the determining unit is used for determining the vehicle contour corresponding to each second reference vehicle set according to the vehicle contour of the reference vehicle contained in each second reference vehicle set in the one or more second reference vehicle sets, and determining the vehicle contour corresponding to each second reference vehicle set as a target vehicle contour.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and a processing unit configured to perform the following determination operation on one or more second vehicles. And the acquisition unit is used for acquiring the vehicle brand and the vehicle outline of any second vehicle. And the processing unit is used for judging whether the vehicle brand and the vehicle outline of any second vehicle are matched with the reference vehicle characteristic information. And the determining unit is used for determining any second vehicle as a third vehicle if the vehicle brand and the vehicle outline of any second vehicle are determined to be matched with the reference vehicle characteristic information. And a processing unit configured to perform the determination operation on the one or more second vehicles, respectively, to determine whether a third vehicle that matches the reference vehicle characteristic information exists in the one or more second vehicles.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the determining unit is used for determining that the vehicle brand and the vehicle contour of any second vehicle are matched with the reference vehicle characteristic information if the vehicle brand of any second vehicle is determined to be the same as the target vehicle brand and the similarity between the vehicle contour of any second vehicle and the target vehicle contour is determined to be equal to or larger than a second preset similarity.
With reference to the second aspect, in a possible implementation manner, the apparatus includes: and the processing unit is used for realizing synchronous video recording with the cameras of the terminal equipment for the third vehicle through the vehicle-mounted cameras on the first vehicle. And the processing unit is used for obtaining target video contents aiming at the third vehicle according to the video contents of the vehicle-mounted camera on the third vehicle and the video contents of the camera of the terminal equipment by taking the timestamp as a mark.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium is configured to store a computer program, where the computer program when executed on a computer causes the computer to execute a shooting method provided by any one of the possible implementation manners of the first aspect, and also implements the beneficial effects provided by the shooting method provided by the first aspect.
In a fourth aspect, embodiments of the present application provide an electronic device that may include a processor and a memory, where the processor and the memory are interconnected. The memory is configured to store a computer program, and the processor is configured to execute the computer program to implement the photographing method provided in the first aspect, and also implement the beneficial effects of the photographing method provided in the first aspect.
By implementing the embodiment of the application, the autopilot controller can judge whether the third vehicle matched with the reference vehicle characteristic information exists around the first vehicle according to the reference vehicle characteristic information of the reference vehicle provided by the user and the current environment image of the first vehicle. Further, the autopilot domain controller may control a terminal device on the first vehicle to photograph the third vehicle. By the shooting method, the automatic driving domain controller can identify and control the terminal device to shoot the third vehicle based on the requirement of the user in the driving mode, so that the driving experience and the safety of the user can be improved.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and that other drawings can be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a flowchart of a shooting control method based on demand identification in a driving mode according to an embodiment of the present application;
Fig. 2 is a schematic diagram of determining a target height of a smart phone support according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a shooting control device based on demand identification in a driving mode according to an embodiment of the present application;
Fig. 4 is a schematic diagram of still another structure of a shooting control device based on demand identification in a driving mode according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps is not limited to the elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In existing intelligent automobiles, when a driver wants to use a mobile phone to shoot a vehicle of interest and form a short video while driving, the mobile phone needs to be taken down from the mobile phone support and the camera function started before shooting. The whole process is time-consuming, the best shooting moment may be missed, and there are certain potential safety hazards, which affects the user experience. Therefore, the technical problem to be solved by the application is: how to safely capture high-quality footage of a vehicle of interest to the user during driving.
In the embodiment of the application, the terminal equipment may be a smart terminal that the user can use for shooting, such as a smart phone, a tablet computer or a wearable device, and the implementation form of the terminal equipment is not particularly limited. The autopilot domain controller carries the data processing and computing capability required for automatic driving, processing data from devices including but not limited to millimeter-wave radar, cameras, laser radar and GPS, and is also responsible for the security of the underlying core data and networking data under automatic driving. The smart phone support is a mobile phone support with functions such as instruction receiving and sending and height adjustment, and the implementation form of the smart phone support is not particularly limited.
Referring to fig. 1, fig. 1 is a schematic flow chart of a shooting control method based on demand recognition in a driving mode, which is provided in an embodiment of the present application, and the method may be applied to an autopilot domain controller on a first vehicle, where the first vehicle is further provided with a smart phone support and a terminal device, and the autopilot domain controller is in communication connection with the smart phone support and the terminal device. Here, the first vehicle may be a vehicle that the user is driving. As shown in fig. 1, the method may specifically include the following steps:
S101, if it is detected that the terminal equipment is in a charging state and the driving mode of the terminal equipment is in an on state, acquiring reference vehicle characteristic information of a reference vehicle provided by the user.
In some possible embodiments, if the autopilot domain controller detects that the terminal device is in a charging state and the driving mode of the terminal device is in an on state, the autopilot domain controller may acquire the reference vehicle characteristic information of a reference vehicle provided by the user. Here, in the embodiment of the present application, the reference vehicle may be a vehicle that the user has indicated an interest in and wants to photograph, and the reference vehicle characteristic information may include the vehicle brand and the vehicle profile of the reference vehicle. Requiring the terminal device to be in a charging state ensures that the terminal device has enough battery power to support the driving mode and the subsequent start of the video recording function.
Here, it should also be noted that the driving mode may be turned on or off by the terminal device according to a user instruction received by the terminal device. Specifically, the user and the terminal device can interact through the human-computer interaction interface, so that the driving mode is started or closed.
In a specific implementation, the autopilot domain controller may send the query information to the terminal device. The autopilot domain controller may then determine whether the terminal device is in a charged state and whether a driving mode of the terminal device is in an on state based on feedback information of the terminal device for the query information.
Further, in the case that the autopilot controller detects that the terminal device is in a charging state and the driving mode of the terminal device is in an on state, the autopilot controller may first acquire the reference vehicle provided by the user.
Alternatively, the autopilot domain controller may receive user input in a human-machine interaction interface on the terminal device or vehicle indicating the relevant content of the reference vehicle. Further, the autopilot controller may determine a reference vehicle of interest to the user based on the related content indicating the reference vehicle.
Optionally, the autopilot domain controller may obtain, based on user authorization, search content and browse records for the vehicle daily by the user. Further, the autopilot controller may analyze and determine a reference vehicle of interest to the user.
Alternatively, the autopilot domain controller may provide a plurality of preselected reference vehicles on the human-machine interaction interface. The autopilot domain controller may then determine the reference vehicle of interest to the user based on the user's selection instruction. It should be noted that the preselected reference vehicles may be obtained, based on user authorization, by analyzing the user's daily vehicle-related search content and browsing records. The preselected reference vehicles may also simply cover all vehicle brands.
Further, the autopilot domain controller may obtain reference vehicle characteristic information for a reference vehicle. Here, the reference vehicle characteristic information may be information used to characterize and determine a certain reference vehicle or a certain class of reference vehicles. In the embodiment of the application, the reference vehicle characteristic information may be a vehicle brand of the reference vehicle and a vehicle profile of the reference vehicle.
Alternatively, the autopilot controller may directly obtain the brand of the vehicle from which the reference vehicle is derived based on information about the particular brand of vehicle selected by the user. Then, the autopilot controller may obtain an official vehicle picture corresponding to the reference vehicle according to the information of the reference vehicle. Further, the autopilot controller may identify and obtain the vehicle profile of the reference vehicle through techniques such as image recognition. Further, the autopilot domain controller may determine a vehicle brand and a vehicle profile corresponding to the reference vehicle as the reference vehicle characteristic information.
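The text does not tie the image recognition of the vehicle profile to any particular technique. As one hedged possibility, an OpenCV-based sketch such as the following could extract an outline from the official vehicle picture; the cv2 calls are standard OpenCV, but treating the largest external contour as the vehicle profile is an assumption of this sketch only.

```python
import cv2

def extract_vehicle_profile(image_path: str):
    """Extract a rough vehicle outline from an official vehicle picture.
    Taking the largest external contour as the profile is an assumption."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # The vehicle is assumed to dominate the official picture, so keep the largest contour.
    return max(contours, key=cv2.contourArea) if contours else None
```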
In an alternative embodiment, the autopilot controller may obtain a number of reference vehicles provided by the user. If the autopilot controller determines that the number of reference vehicles provided by the user is 1, that is, there is one reference vehicle, the autopilot controller may acquire the vehicle brand and the vehicle contour of the reference vehicle. Further, the autopilot controller may determine a vehicle brand and a vehicle profile of the reference vehicle as reference vehicle characteristic information of the reference vehicle.
In another alternative embodiment, the autopilot domain controller may obtain the number of reference vehicles provided by the user. If the autopilot domain controller determines that the number of reference vehicles provided by the user is greater than 1, that is, a plurality of reference vehicles exist, the autopilot domain controller may acquire the vehicle brands of the plurality of reference vehicles. The autopilot domain controller may then determine a target vehicle brand from the vehicle brand of each of the plurality of reference vehicles. Here, the duty ratio (proportion) of the reference vehicles belonging to the target vehicle brand among the plurality of reference vehicles is equal to or larger than the preset duty ratio. The autopilot domain controller may also obtain the vehicle profile of each of the plurality of reference vehicles, and may determine the target vehicle profile corresponding to the plurality of reference vehicles based on the vehicle profile of each reference vehicle. Further, the autopilot domain controller may determine the reference vehicle characteristic information based on the target vehicle brand and the target vehicle profile.
Alternatively, the autopilot domain controller may divide the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brands of the respective reference vehicles. Then, the autopilot domain controller may determine N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of reference vehicles included in each first reference vehicle set. Further, the autopilot domain controller may determine the vehicle brand corresponding to each of the N2 target reference vehicle sets as a target vehicle brand. Here, the vehicle brands corresponding to the respective first reference vehicle sets in the N1 first reference vehicle sets are different from one another, the reference vehicles contained in each first reference vehicle set belong to the vehicle brand corresponding to that set, and the duty ratio of the reference vehicles contained in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than the preset duty ratio. Here, N1 and N2 are both positive integers equal to or greater than 1.
For example, assume here that there are five reference vehicles, reference vehicle A, reference vehicle B, reference vehicle C, reference vehicle D and reference vehicle E, whose vehicle brands correspond to vehicle brand 1, vehicle brand 2, vehicle brand 1, vehicle brand 1 and vehicle brand 2, respectively. Assume that the preset duty ratio is 50%. The autopilot domain controller may divide the five reference vehicles into 2 first reference vehicle sets according to their vehicle brands, in which case N1 is 2. Specifically, first reference vehicle set 1 includes reference vehicle A, reference vehicle C and reference vehicle D, and first reference vehicle set 2 includes reference vehicle B and reference vehicle E. The reference vehicles contained in first reference vehicle set 1 account for 60% of the total, and those in first reference vehicle set 2 account for 40%. Since the 60% duty ratio of first reference vehicle set 1 is greater than the preset duty ratio of 50%, and the 40% duty ratio of first reference vehicle set 2 is less than the preset duty ratio of 50%, the autopilot domain controller may determine vehicle brand 1, corresponding to first reference vehicle set 1, as the target vehicle brand.
For another example, assume here that there are four reference vehicles, reference vehicle A, reference vehicle B, reference vehicle C and reference vehicle D, whose vehicle brands correspond to vehicle brand 1, vehicle brand 2, vehicle brand 2 and vehicle brand 1, respectively. Assume that the preset duty ratio is 50%. The autopilot domain controller may divide the four reference vehicles into 2 first reference vehicle sets according to their vehicle brands, in which case N1 is 2. Specifically, first reference vehicle set 1 includes reference vehicle A and reference vehicle D, and first reference vehicle set 2 includes reference vehicle B and reference vehicle C. The reference vehicles contained in each of first reference vehicle set 1 and first reference vehicle set 2 account for 50% of the total. Since these duty ratios are equal to the preset duty ratio of 50%, the autopilot domain controller may determine both vehicle brand 1, corresponding to first reference vehicle set 1, and vehicle brand 2, corresponding to first reference vehicle set 2, as target vehicle brands.
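A compact Python sketch of this brand-grouping step is given below for illustration; the function name and the representation of a reference vehicle as a (name, brand) pair are assumptions of this sketch, not part of the application.

```python
from collections import Counter
from typing import List, Tuple

def target_vehicle_brands(reference_vehicles: List[Tuple[str, str]],
                          preset_ratio: float = 0.5) -> List[str]:
    """Group reference vehicles by brand (the N1 first reference vehicle sets)
    and keep every brand whose share of the reference vehicles is equal to
    or greater than the preset ratio (the N2 target reference vehicle sets)."""
    total = len(reference_vehicles)
    counts = Counter(brand for _, brand in reference_vehicles)   # N1 sets, keyed by brand
    return [brand for brand, n in counts.items() if n / total >= preset_ratio]

# The two worked examples above:
five = [("A", "brand1"), ("B", "brand2"), ("C", "brand1"), ("D", "brand1"), ("E", "brand2")]
print(target_vehicle_brands(five))   # ['brand1']  (60% >= 50%, 40% < 50%)

four = [("A", "brand1"), ("B", "brand2"), ("C", "brand2"), ("D", "brand1")]
print(target_vehicle_brands(four))   # ['brand1', 'brand2']  (both exactly 50%)
```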
Optionally, if the autopilot controller determines that the plurality of reference vehicles are two reference vehicles, the autopilot controller further determines whether the similarity of the vehicle profiles of the two reference vehicles is greater than a first preset similarity. The autonomous domain controller may determine the vehicle profile of any one of the two reference vehicles as the target vehicle profile if it is determined that the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than a first preset similarity. If the autopilot domain controller determines that the vehicle profile similarity of the two reference vehicles is less than the first preset similarity, the autopilot domain controller may determine the vehicle profiles of the two reference vehicles as target vehicle profiles.
For example, assume here that there are a reference vehicle A and a reference vehicle B, whose vehicle profiles correspond to vehicle profile 1 and vehicle profile 2, respectively, and that the similarity of vehicle profile 1 and vehicle profile 2 is 70%. Assume that the first preset similarity is 60%. Since the 70% similarity of the vehicle profiles of reference vehicle A and reference vehicle B is greater than the first preset similarity of 60%, the autopilot domain controller may determine vehicle profile 1 of reference vehicle A (either of the two vehicle profiles may be chosen) as the target vehicle profile.
For another example, assume here that there are a reference vehicle A and a reference vehicle B, whose vehicle profiles correspond to vehicle profile 1 and vehicle profile 2, respectively, and that the similarity of vehicle profile 1 and vehicle profile 2 is 40%. Assume that the first preset similarity is 60%. Since the 40% similarity of the vehicle profiles of reference vehicle A and reference vehicle B is less than the first preset similarity of 60%, the autopilot domain controller may determine both vehicle profile 1 of reference vehicle A and vehicle profile 2 of reference vehicle B as target vehicle profiles.
Optionally, if the autopilot domain controller determines that the plurality of reference vehicles are three or more reference vehicles, the autopilot domain controller further determines whether the similarity of the vehicle profiles of any two of the three or more reference vehicles is less than the first preset similarity. If it is determined that the similarity of the vehicle profiles of any two of the three or more reference vehicles is less than the first preset similarity, the autopilot domain controller may determine the vehicle profiles of the three or more reference vehicles as target vehicle profiles. If the autopilot domain controller determines that the similarity of the vehicle profiles of any two of the three or more reference vehicles is not less than the first preset similarity, the autopilot domain controller may determine one or more second reference vehicle sets from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles. Further, the autopilot domain controller may determine the vehicle profile corresponding to each second reference vehicle set according to the vehicle profiles of the reference vehicles included in that set, and determine the vehicle profile corresponding to each second reference vehicle set as a target vehicle profile. Specifically, for any one of the one or more second reference vehicle sets, the autopilot domain controller may determine the vehicle profile of any one reference vehicle in that second reference vehicle set as the target vehicle profile. Here, each of the one or more second reference vehicle sets includes at least two reference vehicles, and the similarity of the vehicle profiles of any two of the at least two reference vehicles included in each second reference vehicle set is equal to or greater than the first preset similarity.
For example, it is assumed here that there are a reference vehicle a, a reference vehicle B, and a reference vehicle C, the vehicle profiles of which correspond to a vehicle profile 1, a vehicle profile 2, and a vehicle profile 3, respectively, the similarity of the vehicle profile 1 and the vehicle profile 2 being 40%, the similarity of the vehicle profile 1 and the vehicle profile 3 being 50%, and the similarity of the vehicle profile 2 and the vehicle profile 3 being 30%. Assume that the first preset similarity is 60%. The autopilot controller may determine that the similarity of the vehicle profiles of any two of the three reference vehicles is less than 60% of the first preset similarity, and the autopilot controller may determine the vehicle profile 1 of the reference vehicle a, the vehicle profile 2 of the reference vehicle B, and the vehicle profile 3 of the reference vehicle C as the target vehicle profiles.
For another example, it is assumed here that there are a reference vehicle a, a reference vehicle B, and a reference vehicle C, the vehicle profiles of the three reference vehicles corresponding to a vehicle profile 1, a vehicle profile 2, and a vehicle profile 3, respectively, the similarity of the vehicle profile 1 and the vehicle profile 2 being 80%, the similarity of the vehicle profile 1 and the vehicle profile 3 being 50%, and the similarity of the vehicle profile 2 and the vehicle profile 3 being 40%. Assume that the first preset similarity is 60%. The autopilot controller may determine that the vehicle profile 1 and the vehicle profile 2 have a similarity 80% greater than a first predetermined similarity 60%, and the autopilot controller may determine a second set of reference vehicles from the three reference vehicles based on the vehicle profiles of the three reference vehicles, the second set of reference vehicles including reference vehicle a and reference vehicle B. Further, the autopilot controller may determine the vehicle contour 1 of the reference vehicle a in the second reference vehicle set as the target vehicle contour.
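To make the contour-grouping rule concrete, here is a small Python sketch covering both cases described above (all profiles mutually dissimilar, or some profiles similar enough to be merged into second reference vehicle sets). The pairwise similarity function is left abstract and must be supplied by the caller; the greedy grouping, the names and the handling of ungrouped vehicles are illustrative assumptions of this sketch.

```python
from itertools import combinations
from typing import Callable, Dict, List

def target_vehicle_profiles(profiles: Dict[str, object],
                            similarity: Callable[[object, object], float],
                            first_preset_similarity: float = 0.6) -> List[object]:
    """Determine target vehicle profiles for three or more reference vehicles,
    following the two branches described in the text."""
    names = list(profiles)

    # Branch 1: every pair of profiles is less similar than the threshold,
    # so every reference vehicle's profile becomes a target profile.
    if all(similarity(profiles[a], profiles[b]) < first_preset_similarity
           for a, b in combinations(names, 2)):
        return [profiles[n] for n in names]

    # Branch 2: greedily form 'second reference vehicle sets' of mutually
    # similar profiles (each set has at least two vehicles) and keep one
    # representative profile per set. Vehicles that end up in no set are
    # ignored here, since the text does not specify how they are handled.
    targets: List[object] = []
    used: set = set()
    for i, a in enumerate(names):
        if a in used:
            continue
        group = [a] + [b for b in names[i + 1:] if b not in used
                       and similarity(profiles[a], profiles[b]) >= first_preset_similarity]
        if len(group) >= 2:
            used.update(group)
            targets.append(profiles[a])   # representative profile for this set
    return targets
```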
S102, acquiring a current environment image of the first vehicle, which is acquired by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera.
In some possible embodiments, the autopilot domain controller may acquire a current environment image of the first vehicle acquired by the body domain controller on the first vehicle through the vehicle-mounted camera. The environment image is obtained by capturing the surroundings of the first vehicle and may contain objects such as vehicles, people, trees, and buildings around the first vehicle.
In a specific implementation, the autopilot domain controller can start the vehicle-mounted camera through the vehicle body domain controller on the first vehicle to shoot the surrounding environment of the first vehicle so as to obtain the current environment image of the first vehicle.
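As a rough illustration of S102, the snippet below shows one way the autopilot domain controller could request the current environment image through the body domain controller. The `BodyDomainController` class and its `capture_environment_image` method are hypothetical stand-ins for the vehicle's real middleware interface.

```python
import numpy as np


class BodyDomainController:
    """Hypothetical interface to the body domain controller."""

    def capture_environment_image(self):
        # Start the vehicle-mounted camera and return one frame; a placeholder
        # black frame is returned here instead of real camera data.
        return np.zeros((720, 1280, 3), dtype=np.uint8)


def get_current_environment_image(body_controller):
    # S102: ask the body domain controller for the current environment image.
    return body_controller.capture_environment_image()


image = get_current_environment_image(BodyDomainController())
print(image.shape)   # (720, 1280, 3)
```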
And S103, if one or more second vehicles exist around the first vehicle according to the environment image, extracting vehicle characteristic information of the one or more second vehicles, and determining whether a third vehicle matched with the reference vehicle characteristic information exists in the one or more second vehicles according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles.
In some possible embodiments, if the autopilot controller determines that one or more second vehicles exist around the first vehicle according to the environmental image, the autopilot controller may extract vehicle characteristic information of the one or more second vehicles, and determine whether a third vehicle matching the reference vehicle characteristic information exists in the one or more second vehicles according to the vehicle characteristic information of the reference vehicle and the vehicle characteristic information of the one or more second vehicles.
In a specific implementation, the autopilot domain controller performs a determination operation on each of the one or more second vehicles to determine whether a third vehicle matching the reference vehicle characteristic information exists among the one or more second vehicles. The following takes any one of the one or more second vehicles as an example. The autopilot domain controller may obtain the vehicle brand and vehicle profile of that second vehicle, and then determine whether the vehicle brand and vehicle profile of that second vehicle match the reference vehicle characteristic information.
Further, the autopilot domain controller may determine that second vehicle as the third vehicle if it determines that the vehicle brand and vehicle profile of that second vehicle match the reference vehicle characteristic information. Specifically, if the autopilot domain controller determines that the vehicle brand of that second vehicle is the same as the target vehicle brand, and that the similarity between the vehicle profile of that second vehicle and the target vehicle profile is equal to or greater than a second preset similarity, the autopilot domain controller may determine that the vehicle brand and vehicle profile of that second vehicle match the reference vehicle characteristic information.
By way of example, assume that the target vehicle brand contained in the reference vehicle characteristic information of the reference vehicle is vehicle brand 1, the target vehicle profile is vehicle profile 1, and the second preset similarity is 60%. Assume further that the environment image contains a second vehicle whose vehicle brand is vehicle brand 1 and whose vehicle profile is vehicle profile 2, and that the similarity between vehicle profile 1 and vehicle profile 2 is 80%. The autopilot domain controller may obtain that the vehicle brand of the second vehicle contained in the environment image is vehicle brand 1 and that its vehicle profile is vehicle profile 2. Since the vehicle brand of the second vehicle is identical to the target vehicle brand (vehicle brand 1), and the 80% similarity between vehicle profile 2 of the second vehicle and the target vehicle profile, vehicle profile 1, is greater than the second preset similarity of 60%, the autopilot domain controller may determine that the vehicle brand and vehicle profile of the second vehicle match the reference vehicle characteristic information.
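The decision rule of S103 can be sketched as follows. The brand strings, the contour representation, and the `contour_similarity` helper are illustrative assumptions; only the rule itself (same brand as the target vehicle brand and contour similarity at or above the second preset similarity, checked against each target vehicle profile) follows the description above.

```python
def contour_similarity(contour_a, contour_b):
    # Hypothetical comparison of contour feature vectors; a real system would
    # compare shape descriptors produced by the perception stack.
    matches = sum(1 for a, b in zip(contour_a, contour_b) if abs(a - b) < 0.1)
    return matches / max(len(contour_a), len(contour_b))


def matches_reference(second_vehicle, reference, second_preset_similarity=0.6):
    """True if the second vehicle matches the reference vehicle characteristic information."""
    same_brand = second_vehicle["brand"] == reference["target_brand"]
    similar_profile = any(
        contour_similarity(second_vehicle["contour"], target) >= second_preset_similarity
        for target in reference["target_contours"]
    )
    return same_brand and similar_profile


def find_third_vehicle(second_vehicles, reference):
    # Run the determination on every detected second vehicle and return the
    # first match, if any (what to do with several matches is left open here).
    return next((v for v in second_vehicles if matches_reference(v, reference)), None)


reference = {"target_brand": "brand-1", "target_contours": [[0.90, 0.80, 0.70]]}
detected = [{"brand": "brand-2", "contour": [0.10, 0.20, 0.30]},
            {"brand": "brand-1", "contour": [0.88, 0.82, 0.71]}]
print(find_third_vehicle(detected, reference))   # the second entry matches
```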
And S104, if the third vehicle is determined to exist, determining whether terminal equipment exists on the smart phone support.
In some possible embodiments, if the autopilot controller determines that there is a third vehicle that matches the reference vehicle characteristic information, it may be determined whether there is a terminal device on the smartphone mount.
In a specific implementation, the autopilot domain controller may send query information to the smartphone mount. Further, the autopilot domain controller may determine whether a terminal device is present on the smart phone holder based on feedback information of the smart phone holder for the query information.
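The presence check can be sketched as a simple query/feedback exchange. The message name and the `SmartphoneMount` interface below are hypothetical; the description only fixes the pattern of sending query information and interpreting the mount's feedback.

```python
class SmartphoneMount:
    """Hypothetical mount interface reachable over the in-vehicle link."""

    def __init__(self, device_clamped):
        self._device_clamped = device_clamped

    def query(self, message):
        if message == "DEVICE_PRESENT?":
            return {"device_present": self._device_clamped}
        return {}


def terminal_device_present(mount):
    # S104: send query information to the mount and decide from its feedback
    # whether a terminal device is clamped in it.
    feedback = mount.query("DEVICE_PRESENT?")
    return bool(feedback.get("device_present"))


print(terminal_device_present(SmartphoneMount(device_clamped=True)))   # True
```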
S105, if it is determined that the terminal equipment exists on the smart phone support, determining a target clamping height corresponding to the smart phone support, sending a first instruction to the smart phone support to enable the smart phone support to be adjusted to the target clamping height, and sending a second instruction to the terminal equipment to enable the terminal equipment to start a video recording function of the camera and record the video of the third vehicle until the third vehicle does not exist in a view finding range of the camera of the terminal equipment.
In some possible embodiments, if it is determined that the terminal device exists on the smart phone support, the autopilot domain controller may determine a target clamping height corresponding to the smart phone support, send a first instruction to the smart phone support so that the smart phone support adjusts to the target clamping height, and send a second instruction to the terminal device so that the terminal device starts the video recording function of its camera and records a video of the third vehicle, stopping the recording once the third vehicle is no longer within the viewing range of the camera of the terminal device. Here, the first instruction includes the target clamping height. When the clamping height of the smart phone support is the target clamping height, the original viewing range of the camera of the terminal device includes the third vehicle.
In an alternative embodiment, if the autopilot domain controller determines that the terminal device exists on the smart phone support, the autopilot domain controller may obtain the current clamping height of the smart phone support and the posture of the terminal device. Further, the autopilot domain controller may determine whether the terminal device is inverted based on the posture of the terminal device. Then, if the autopilot domain controller determines that the terminal device is not inverted, it can determine the size of the terminal device and the position parameter of the camera on the terminal device according to the preset terminal device model.
Further, the autopilot domain controller may determine the target clamping height at which the original viewing range of the camera of the terminal device includes the third vehicle, according to the current clamping height of the smart phone support, the size of the terminal device, the position parameter of the camera on the terminal device, the pre-stored geometric model feature parameters between the smart phone support base and the hood of the first vehicle, and the first relative position of the third vehicle and the smart phone support base. Here, the geometric model feature parameters between the smart phone support base and the hood of the first vehicle may include the relative position, relative height, relative distance, and the like of the smart phone support base and the hood of the first vehicle.
In a specific implementation, the autopilot domain controller can determine the second relative position of the camera of the terminal device and the base of the smart phone support according to the current clamping height of the smart phone support, the size of the terminal device, and the position parameter of the camera on the terminal device. Then, the autopilot domain controller may determine the first clamping height range over which the original viewing range of the terminal device includes the third vehicle, and the viewing range corresponding to the first clamping height range, according to the second relative position, the first relative position of the third vehicle and the smart phone support base, the stroke of the smart phone support, and the shooting parameters of the camera of the terminal device.
Further, the autopilot domain controller may determine a second clamping height range in which the actual framing effect is not blocked by the hood of the first vehicle, according to the pre-stored geometric model feature parameters between the smart phone support base and the hood of the first vehicle and the viewing range corresponding to the first clamping height range. Here, the stroke of the smart phone support refers to the range of clamping heights over which the smart phone support can be adjusted, and the original viewing range of the terminal device refers to the range the camera can capture while the terminal device is held still after the video/photo function of its camera has been started. Specifically, the autopilot domain controller may determine a set of relationships between the clamping height of the smart phone support and the viewing range of the camera of the terminal device according to the second relative position, the stroke of the smart phone support, and the shooting parameters of the camera of the terminal device. Further, the autopilot domain controller may determine, according to the set of relationships and the first relative position, the first clamping height range over which the original viewing range of the camera of the terminal device includes the third vehicle, and the viewing range corresponding to the first clamping height range. Here, the shooting parameters of the camera of the terminal device may include the pixel value, aperture size, and the like of the camera of the terminal device.
The position parameter of the camera on the terminal device is a parameter that represents the specific position of the camera on the terminal device, and may include orientation information and/or size information of the camera. For example, the position parameter may include orientation information indicating that the camera is located at the upper-left corner of the terminal device. As another example, the position parameter may include both orientation information and size information, where the orientation information indicates that the camera is at the upper-left corner of the terminal device and the size information indicates that the distance between the camera and the phone frame is 5 mm.
Further, the autopilot domain controller may select the minimum value in the second clamping height range as the target clamping height. Then, the autopilot domain controller may send a first instruction to the smart phone support so that the smart phone support adjusts to the target clamping height, and may send a second instruction to the terminal device so that the terminal device starts the video recording function of its camera and records the video of the third vehicle, stopping the recording once the third vehicle is no longer within the viewing range of the camera of the terminal device.
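A simplified geometric sketch of this height selection follows. The pinhole-style field-of-view model, the parameter names, and the numeric values are assumptions made for illustration; only the procedure is taken from the description: build the relation between clamping height and viewing range, keep the heights at which the third vehicle is fully in frame (the first clamping height range), discard those whose view is blocked by the hood (leaving the second clamping height range), and take the minimum.

```python
import math


def in_frame(cam_height, d_target, y_bottom, y_top, vertical_fov_deg):
    # True if the third vehicle (vertical extent y_bottom..y_top at distance
    # d_target) lies inside the camera's vertical field of view.
    half = math.radians(vertical_fov_deg) / 2.0
    ang_top = math.atan2(y_top - cam_height, d_target)
    ang_bottom = math.atan2(y_bottom - cam_height, d_target)
    return ang_bottom >= -half and ang_top <= half


def clears_hood(cam_height, d_target, y_bottom, d_hood, h_hood):
    # Height of the sight line (camera -> bottom of the third vehicle) where it
    # crosses the hood edge; the view is unobstructed if it passes above the hood.
    sight_height = cam_height + (y_bottom - cam_height) * d_hood / d_target
    return sight_height > h_hood


def pick_target_clamp_height(stroke, cam_offset, d_target, y_bottom, y_top,
                             vertical_fov_deg, d_hood, h_hood, step=0.005):
    """Smallest clamping height (within the stroke) whose view contains the third
    vehicle and is not blocked by the hood; None if no such height exists."""
    feasible = []
    h = stroke[0]
    while h <= stroke[1] + 1e-9:
        cam_height = h + cam_offset                                              # camera above the mount base
        if (in_frame(cam_height, d_target, y_bottom, y_top, vertical_fov_deg)    # first range
                and clears_hood(cam_height, d_target, y_bottom, d_hood, h_hood)):  # second range
            feasible.append(round(h, 3))
        h += step
    return min(feasible) if feasible else None


# Illustrative numbers only (metres, heights relative to the mount base); real values
# would come from the stored geometric model and the detected third-vehicle position.
print(pick_target_clamp_height(stroke=(0.00, 0.12), cam_offset=0.05, d_target=8.0,
                               y_bottom=-0.6, y_top=0.9, vertical_fov_deg=60.0,
                               d_hood=1.2, h_hood=-0.02))   # -> 0.035
```

The real controller would express the same constraints through the stored geometric model rather than a scan over discretised heights; the sketch only shows the feasibility test and the choice of the minimum.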
Alternatively, if the autopilot domain controller determines that a third vehicle is present, it may obtain a first height of the hood of the first vehicle relative to the chassis of the first vehicle and a second height of the camera of the terminal device on the first vehicle relative to the chassis of the first vehicle. Here, the first height of the hood relative to the chassis preferably refers to the maximum height of the hood above the chassis of the first vehicle. The autopilot domain controller may then determine, based on the first height and the second height, a target height, relative to the chassis of the first vehicle, of the smart phone support on which the terminal device is placed. Further, the autopilot domain controller may adjust the smart phone support to the target height and then activate the video recording function of the camera of the terminal device to record the third vehicle. Here, the target height of the smart phone support relative to the chassis of the first vehicle may be greater than or less than the first height.
For example, referring to fig. 2, fig. 2 is a schematic diagram illustrating determining a target height of a smart phone stand according to an embodiment of the present application. As shown in fig. 2, it is assumed here that the first height of the hood of the first vehicle relative to the chassis of the first vehicle is h1 and the second height of the camera of the terminal device on the first vehicle relative to the chassis of the first vehicle is h2. The autopilot domain controller may determine that the target height of the smartphone mount for placing the terminal device with respect to the chassis of the first vehicle is h0 according to the first height h1 and the second height h2. Then, the autopilot domain controller may adjust the smartphone mount to a target height h0. Further, the autopilot controller may activate a video recording function of a camera of the terminal device to photograph the third vehicle.
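The relationship between h0, h1 and h2 is not spelled out above, so the sketch below only encodes one plausible reading: raise the mount until the terminal-device camera sits a small margin above the hood line, limited by the mount's travel. The margin, the travel limit, and the rule itself are assumptions.

```python
def target_mount_height(h1_hood, h2_camera, margin=0.03, max_height=1.20):
    """Hypothetical rule for h0: lift the terminal-device camera just above the hood line.

    h1_hood: first height (hood above the chassis, at its highest point).
    h2_camera: second height (terminal-device camera above the chassis).
    Returns h0, the target height of the mount relative to the chassis.
    """
    desired = max(h2_camera, h1_hood + margin)   # never lower the camera below where it already is
    return min(desired, max_height)              # stay within the mount's physical travel


print(target_mount_height(h1_hood=0.95, h2_camera=0.90))   # -> 0.98
```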
S106, if the fact that the intelligent mobile phone support does not have terminal equipment is determined, reminding information is generated and output; determining whether terminal equipment exists on the smart phone support again; and if the fact that the terminal equipment does not exist on the smart phone support is determined again, the current system state is maintained.
In some possible embodiments, the autopilot domain controller may generate and output reminder information if it determines that no terminal device is present on the smart phone support. After generating and outputting the reminder information, the autopilot domain controller may again determine whether a terminal device is present on the smart phone support. If, after this second check, the terminal device still does not exist on the smart phone support, the autopilot domain controller keeps the current system state. The reminder information is used to remind the user to place the terminal device on the smart phone support.
Optionally, if the autopilot domain controller determines that the terminal device does not exist on the smart phone support, it may control a human-machine interaction interface on the vehicle to generate and output the reminder information. For example, the autopilot domain controller may control the human-machine interaction interface on the vehicle to display the prompt "please place the terminal device on the smart phone support" to remind the user to place the terminal device on the smart phone support.
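S106 reduces to a small control flow, sketched below. The HMI call and the single-retry policy are assumptions; the description only requires outputting the reminder, checking again, and keeping the current system state if the terminal device is still absent.

```python
def handle_missing_device(mount, hmi_show, device_present):
    """S106: remind the user, check the mount again, keep the current state if still absent."""
    hmi_show("Please place the terminal device on the smart phone support")
    if device_present(mount):
        return "device_present"       # S105 can proceed on the next cycle
    return "keep_current_state"       # no further action for now


# Usage with trivial stand-ins for the HMI output and the presence check.
print(handle_missing_device(mount=None, hmi_show=print, device_present=lambda m: False))
```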
In the above implementation, the autopilot domain controller can determine whether a third vehicle matching the reference vehicle characteristic information exists around the first vehicle according to the reference vehicle characteristic information of the reference vehicle provided by the user and the current environment image of the first vehicle. Further, the autopilot domain controller can control the terminal device on the first vehicle to record the third vehicle. Because the autopilot domain controller recognizes the user's demand in the driving mode and controls the terminal device to record the third vehicle accordingly, the user's driving experience and safety can be improved. In some possible embodiments, the autopilot domain controller may also record the third vehicle with the vehicle-mounted camera on the first vehicle in synchronization with the camera of the terminal device. Furthermore, the autopilot domain controller may use the timestamps as markers to obtain the target video content for the third vehicle from the video content captured by the vehicle-mounted camera and the video content captured by the camera of the terminal device.
Optionally, the autopilot domain controller may obtain the current environmental video of the first vehicle through the body domain controller. Furthermore, the autopilot domain controller may splice a shot video obtained by shooting the third vehicle through the terminal device with a current environmental video of the first vehicle according to a time stamp of the shot video, so as to obtain videos shot together under the same time stamp.
Illustratively, assume that the autopilot domain controller obtains, through the body domain controller, a current environment video of the first vehicle covering 10:15:00 to 10:16:00, and that the shot video of the third vehicle recorded by the terminal device covers 10:15:15 to 10:15:30. By combining the time stamps of the shot video with the environment video, the autopilot domain controller can obtain a spliced video covering 10:15:05 to 10:15:30.
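A minimal sketch of the timestamp-based splicing follows. It represents each video as a list of (timestamp, frame) pairs and simply prepends environment footage from shortly before the terminal recording started; the data layout and the ten-second lead-in are assumptions chosen to reproduce the 10:15:05 start of the example.

```python
from datetime import datetime, timedelta


def splice_by_timestamp(env_frames, shot_frames, lead_in=timedelta(seconds=10)):
    """Combine environment-camera and terminal-camera footage on a shared timeline.

    env_frames, shot_frames: lists of (timestamp, frame) pairs sorted by time.
    lead_in: how much environment footage to keep before the terminal recording starts.
    """
    if not shot_frames:
        return []
    shot_start = shot_frames[0][0]
    prefix = [(t, f) for t, f in env_frames if shot_start - lead_in <= t < shot_start]
    return prefix + shot_frames


def ts(hh, mm, ss):
    return datetime(2022, 12, 30, hh, mm, ss)


env = [(ts(10, 15, s), f"env-{s}") for s in range(0, 60)]      # 10:15:00 - 10:15:59
shot = [(ts(10, 15, s), f"shot-{s}") for s in range(15, 31)]   # 10:15:15 - 10:15:30
combined = splice_by_timestamp(env, shot)
print(combined[0][0].time(), "->", combined[-1][0].time())     # 10:15:05 -> 10:15:30
```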
Alternatively, the autopilot domain controller may acquire the current environment image of the first vehicle via the body domain controller. Then, based on the timestamp of the photographed image obtained by shooting the third vehicle, the autopilot domain controller may acquire the environment image corresponding to that timestamp. Further, with the help of techniques such as image enhancement, the autopilot domain controller can select, from the photographed image and the environment image sharing that timestamp, the image with the better quality.
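As one illustration of this quality comparison, the sketch below scores each candidate frame with a variance-of-Laplacian sharpness measure and keeps the sharper one. The use of OpenCV and of sharpness as the quality criterion are assumptions; the description only requires picking the better-quality image for a given timestamp.

```python
import cv2
import numpy as np


def sharpness(image):
    # Variance of the Laplacian: a common, simple focus/sharpness measure.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def pick_better_image(shot_image, env_image):
    """Return whichever image of the same timestamp looks sharper."""
    return shot_image if sharpness(shot_image) >= sharpness(env_image) else env_image


# Synthetic example: a noisy frame scores higher on this metric than a flat one.
flat = np.full((120, 160, 3), 128, dtype=np.uint8)
noisy = np.clip(flat + np.random.randint(-40, 40, flat.shape), 0, 255).astype(np.uint8)
print(pick_better_image(noisy, flat) is noisy)   # True
```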
In this implementation, the autopilot domain controller processes the image captured of the third vehicle together with the current environment image of the first vehicle acquired by the body domain controller, so that a more complete and higher-quality photographic record of the third vehicle can be obtained.
Referring to fig. 3, fig. 3 is a schematic diagram of a shooting control device based on demand recognition in a driving mode according to an embodiment of the application. As shown in fig. 3, the photographing control apparatus based on the demand recognition in the driving mode may include: an acquisition unit 31, a processing unit 32, and a determination unit 33.
In a specific implementation, the acquiring unit 31 is configured to acquire vehicle feature information of a reference vehicle provided by a user if it is detected that the terminal device is in a charging state and a driving mode of the terminal device is in an on state. An acquiring unit 31 is configured to acquire a current environmental image of the first vehicle acquired by a body domain controller on the first vehicle through an on-board camera. The processing unit 32 is configured to extract vehicle feature information of one or more second vehicles if it is determined that one or more second vehicles exist around the first vehicle according to the environmental image, and determine whether a third vehicle matching the reference vehicle feature information exists in the one or more second vehicles according to the vehicle feature information of the reference vehicle and the vehicle feature information of the one or more second vehicles. And the determining unit 33 is configured to determine whether the terminal device exists on the smart phone support if it is determined that the third vehicle exists. And the processing unit 32 is configured to determine, if it is determined that the terminal device exists on the smart phone support, a target clamping height corresponding to the smart phone support, and send a first instruction to the smart phone support, so that the smart phone support is adjusted to the target clamping height, and send a second instruction to the terminal device, so that the terminal device starts a video recording function of the camera and records a video of the third vehicle, and stop recording until the third vehicle does not exist in a view finding range of the camera of the terminal device, where the first instruction includes the target clamping height, and when the clamping height of the smart phone support is the target clamping height, an original view finding range of the camera of the terminal device includes the third vehicle. The processing unit 32 is configured to generate and output a reminder if it is determined that the terminal device does not exist on the smart phone support; determining whether terminal equipment exists on the smart phone support again; and if the fact that the terminal equipment does not exist on the smart phone support is determined again, the current system state is maintained, wherein the reminding information is used for reminding a user to place the terminal equipment on the smart phone support.
In an alternative embodiment, the obtaining unit 31 is configured to obtain the current clamping height of the smart phone support and the posture of the terminal device. A determining unit 33, configured to determine whether the terminal device is inverted according to the posture of the terminal device. And the processing unit 32 is configured to determine the size of the terminal device and the position parameter of the camera of the terminal device on the terminal device according to the preset terminal device model if the terminal device is determined not to be inverted. The processing unit 32 is configured to determine a target clamping height corresponding to the smart phone support according to a current clamping height of the smart phone support, a size of the terminal device, a position parameter of the camera at the terminal device, a geometric model feature parameter between a pre-stored smart phone support base and an engine cover of the first vehicle, and a first relative position between the third vehicle and the smart phone support base.
In an alternative implementation manner, please refer to fig. 4, which is a schematic diagram of a further structure of a shooting control apparatus based on demand recognition in a driving mode according to an embodiment of the present application. As shown in fig. 4, the shooting control apparatus based on demand recognition in the driving mode may further include a selection unit 34. The processing unit 32 is configured to determine a second relative position between the camera of the terminal device and the base of the smart phone support according to the current clamping height of the smart phone support, the size of the terminal device, and the position parameter of the camera on the terminal device. The processing unit 32 is configured to determine, according to the second relative position, the first relative position of the third vehicle and the base of the smart phone support, the stroke of the smart phone support, and the shooting parameters of the camera of the terminal device, the first clamping height range over which the original viewing range of the camera of the terminal device includes the third vehicle, and the viewing range corresponding to the first clamping height range. The processing unit 32 is configured to determine a second clamping height range in which the actual framing effect is not blocked by the hood of the first vehicle according to the pre-stored geometric model feature parameters between the smart phone support base and the hood of the first vehicle and the viewing range corresponding to the first clamping height range. And the selecting unit 34 is configured to select the minimum value in the second clamping height range as the target clamping height.
In an alternative embodiment, the processing unit 32 is configured to determine the set of relationships between the clamping height of the smart phone support and the viewing range of the camera of the terminal device according to the second relative position, the stroke of the smart phone support, and the shooting parameters of the camera of the terminal device. The processing unit 32 is configured to determine, according to the relation set and the first relative position, the first clamping height range over which the original viewing range of the camera of the terminal device includes the third vehicle, and the viewing range corresponding to the first clamping height range.
In an alternative embodiment, the obtaining unit 31 is configured to obtain the number of reference vehicles provided by the user. The processing unit 32 is configured to acquire a vehicle brand and a vehicle contour of a reference vehicle and determine the vehicle brand and the vehicle contour of the reference vehicle as reference vehicle feature information of the reference vehicle if it is determined that the number of reference vehicles provided by the user is 1.
In an alternative embodiment, the processing unit 32 is configured to obtain a vehicle brand of each reference vehicle in the plurality of reference vehicles if it is determined that the number of reference vehicles provided by the user is greater than 1. A determining unit 33 for determining a target vehicle brand based on a vehicle brand of each of a plurality of reference vehicles, wherein a duty ratio of the plurality of reference vehicles to the reference vehicle belonging to the target vehicle brand is equal to or greater than a preset duty ratio. The processing unit 32 is configured to acquire a vehicle profile of each reference vehicle of the plurality of reference vehicles, and determine a target vehicle profile corresponding to the plurality of reference vehicles according to the vehicle profile of each reference vehicle. A determining unit 33 for determining reference vehicle characteristic information according to the target vehicle brand and the target vehicle contour.
In an alternative embodiment, the processing unit 32 is configured to divide the plurality of reference vehicles into N1 first reference vehicle sets according to vehicle brands of the reference vehicles, where the vehicle brands corresponding to each first reference vehicle set in the N1 first reference vehicle sets are different, the reference vehicle contained in each first reference vehicle set is the same as the vehicle brand corresponding to each first reference vehicle set, and N1 is a positive integer equal to or greater than 1. A determining unit 33, configured to determine N2 target reference vehicle sets from N1 first reference vehicle sets according to the number of reference vehicles included in each first reference vehicle set, where a duty ratio of reference vehicles included in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than a preset duty ratio, and N2 is a positive integer equal to or greater than 1. A determining unit 33, configured to determine a vehicle brand corresponding to each target reference vehicle set of the N2 target reference vehicle sets as a target vehicle brand.
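The brand-grouping rule handled by these units can be sketched in a few lines; the 30% preset duty ratio in the example is an assumption, while the procedure itself (group the reference vehicles by brand into the N1 first reference vehicle sets, then keep as target brands those whose share of all reference vehicles reaches the preset duty ratio) follows the description.

```python
from collections import Counter


def target_vehicle_brands(reference_brands, preset_duty_ratio=0.3):
    """Group reference vehicles by brand and keep brands meeting the preset duty ratio."""
    counts = Counter(reference_brands)        # brand -> number of reference vehicles (N1 sets)
    total = len(reference_brands)
    return [brand for brand, n in counts.items() if n / total >= preset_duty_ratio]


# Five reference vehicles, three of them of brand-1: only brand-1 reaches the 30% share.
print(target_vehicle_brands(["brand-1", "brand-1", "brand-2", "brand-1", "brand-3"]))   # ['brand-1']
```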
In an alternative embodiment, the processing unit 32 is configured to determine whether the similarity of the vehicle profiles of the two reference vehicles is greater than a first preset similarity if it is determined that the plurality of reference vehicles are two reference vehicles. The determining unit 33 is configured to determine the vehicle profile of either one of the two reference vehicles as the target vehicle profile if it is determined that the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than the first preset similarity. The determining unit 33 is configured to determine both of the vehicle profiles of the two reference vehicles as target vehicle profiles if it is determined that the similarity of the vehicle profiles of the two reference vehicles is smaller than the first preset similarity.
In an alternative embodiment, the processing unit 32 is configured to determine whether the similarity of the vehicle profiles of any two reference vehicles of the three or more reference vehicles is less than the first preset similarity if it is determined that the plurality of reference vehicles are three or more reference vehicles. And a determining unit 33 configured to determine the vehicle profiles of any two of the three or more reference vehicles as the target vehicle profile if it is determined that the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity. And a determining unit 33, configured to determine one or more second reference vehicle sets from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles if it is determined that the similarity of the vehicle profiles of any two reference vehicles in the three or more reference vehicles is not less than the first preset similarity, where the similarity of the vehicle profiles of any two reference vehicles in at least two reference vehicles included in the one or more second reference vehicle sets is equal to or greater than the first preset similarity. A determining unit 33, configured to determine a vehicle contour corresponding to each second reference vehicle set according to the vehicle contour of the reference vehicle included in each second reference vehicle set in the one or more second reference vehicle sets, and determine the vehicle contour corresponding to each second reference vehicle set as the target vehicle contour.
In an alternative embodiment, the processing unit 32 is configured to perform the following determination operation on any one of the one or more second vehicles. The acquisition unit 31 is configured to acquire the vehicle brand and vehicle contour of that second vehicle. The processing unit 32 is configured to determine whether the vehicle brand and vehicle contour of that second vehicle match the reference vehicle characteristic information. The determining unit 33 is configured to determine that second vehicle as the third vehicle if it is determined that its vehicle brand and vehicle contour match the reference vehicle characteristic information. The processing unit 32 is further configured to perform the determination operation on the one or more second vehicles respectively, to determine whether a third vehicle matching the reference vehicle characteristic information exists among the one or more second vehicles.
In an alternative embodiment, the determining unit 33 is configured to determine that the vehicle brand and the vehicle contour of any second vehicle match the reference vehicle feature information if it is determined that the vehicle brand of any second vehicle is the same as the target vehicle brand, and it is determined that the similarity between the vehicle contour of any second vehicle and the target vehicle contour is equal to or greater than a second preset similarity.
In an alternative embodiment, the processing unit 32 is configured to implement, for the third vehicle, a synchronized video recording with the camera of the terminal device by using the onboard camera of the first vehicle. The processing unit 32 is configured to obtain the target video content for the third vehicle according to the video content of the vehicle-mounted camera for the third vehicle and the video content of the camera of the terminal device by using the timestamp as a mark.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may be an autopilot controller in the above embodiment, and may be configured to implement the steps of the demand-based recognition photographing control method in the driving mode executed by the autopilot controller described in the above embodiment. The electronic device may include: a processor 51, a memory 52 and a bus system 53.
The memory 52 includes, but is not limited to, RAM, ROM, EPROM, or CD-ROM, and is used to store relevant instructions and data. The memory 52 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof:
operation instructions: including various operational instructions for carrying out various operations.
Operating system: including various system programs for implementing various basic services and handling hardware-based tasks.
Only one memory is shown in fig. 5, but a plurality of memories may be provided as needed.
As shown in fig. 5, the electronic device may further include an input-output device 54, and the input-output device 54 may be a communication module or a transceiver circuit. In the embodiment of the present application, the input-output device 54 is used to perform the transmission and reception process of data or signaling such as the reference vehicle characteristic information and the like referred to in the embodiment.
The processor 51 may be a controller, CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with the disclosure of embodiments of the application. The processor 51 may also be a combination that implements computing functionality, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
In particular applications, the various components of the electronic device are coupled together by a bus system 53, where the bus system 53 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 53 in fig. 5. For ease of illustration, only schematic illustrations are shown in fig. 5.
It should be noted that, in practical applications, the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
It will be appreciated that the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory described in the embodiments of the present application is intended to include, without being limited to, these and any other suitable types of memory.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a computer, implements the method or steps performed by the autopilot domain controller in the above embodiment.
Embodiments of the present application also provide a computer program product which, when executed by a computer, implements the method or steps performed by the autopilot controller in the above embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the described order of actions, since some steps may be performed in another order or simultaneously according to the present application. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the actions involved are not necessarily required by the present application.
Although the application is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will appreciate that all or part of the steps in the various methods of the method embodiments of any of the shooting methods described above may be accomplished by a program that instructs associated hardware, the program may be stored in a computer readable memory, and the memory may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The foregoing describes the embodiments of the present application in detail, using specific examples to illustrate the principles and implementations of the shooting control method and apparatus based on demand recognition in a driving mode; the description of the above embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific implementations and application scope, and the content of this specification should therefore not be construed as limiting the present application.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present application in further detail, and are not to be construed as limiting the scope of the application, but are merely intended to cover any modifications, equivalents, improvements, etc. based on the teachings of the application.
Claims (10)
1. The shooting control method based on demand recognition in a driving mode is characterized by being applied to an automatic driving domain controller on a first vehicle, wherein a smart phone support and terminal equipment are further arranged on the first vehicle, and the automatic driving domain controller is in communication connection with the smart phone support and the terminal equipment, and the method comprises the following steps:
if the terminal equipment is detected to be in a charging state and the driving mode of the terminal equipment is detected to be in an opening state, acquiring reference vehicle characteristic information of a reference vehicle provided by a user;
acquiring a current environment image of the first vehicle, which is acquired by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera;
if one or more second vehicles exist around the first vehicle according to the environment image, extracting vehicle characteristic information of the one or more second vehicles, and determining whether a third vehicle matched with the reference vehicle characteristic information exists in the one or more second vehicles according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles;
If the third vehicle is determined to exist, determining whether the terminal equipment exists on the smart phone support;
if the terminal equipment is determined to exist on the smart phone support, determining a target clamping height corresponding to the smart phone support, and sending a first instruction to the smart phone support so that the smart phone support can be adjusted to the target clamping height, wherein the first instruction comprises the target clamping height, and when the clamping height of the smart phone support is the target clamping height, the original view finding range of the terminal equipment comprises the third vehicle;
sending a second instruction to the terminal equipment, so that the terminal equipment starts the video recording function of the camera and records the video of the third vehicle until the third vehicle does not exist in the view finding range of the camera of the terminal equipment, and stopping recording;
if the terminal equipment does not exist on the smart phone support, generating and outputting reminding information; determining whether the terminal equipment exists on the smart phone support again; and if it is determined again that the terminal equipment does not exist on the smart phone support, maintaining the current system state, wherein the reminding information is used for reminding the user to place the terminal equipment on the smart phone support.
2. The method of claim 1, wherein determining the target clamping height corresponding to the smartphone mount comprises:
acquiring the current clamping height of the smart phone support and the posture of the terminal equipment;
determining whether the terminal equipment is inverted according to the posture of the terminal equipment;
if the terminal equipment is determined not to be inverted, determining the size of the terminal equipment and the position parameters of a camera of the terminal equipment on the terminal equipment according to the preset terminal equipment model;
and determining the target clamping height corresponding to the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment, the position parameter of the camera of the terminal equipment on the terminal equipment, the geometric model characteristic parameter between the pre-stored smart phone support base and the engine cover of the first vehicle, and the first relative position of the third vehicle and the smart phone support base.
3. The method according to claim 2, wherein the determining the target clamping height corresponding to the smart phone holder according to the current clamping height of the smart phone holder, the size of the terminal device, the position parameter of the camera of the terminal device on the terminal device, the pre-stored geometric model feature parameter between the smart phone holder base and the hood of the first vehicle, and the first relative position of the third vehicle and the smart phone holder base includes:
Determining a second relative position of a camera of the terminal equipment and a base of the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment and the position parameters of the camera of the terminal equipment on the terminal equipment;
determining, according to the second relative position, the first relative position of the third vehicle and the base of the smart phone support, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment, a first clamping height range within which an original view finding range of the camera of the terminal equipment comprises the third vehicle, and a view finding range corresponding to the first clamping height range;
determining a second clamping height range in which an actual framing effect is not blocked by the engine cover of the first vehicle according to a prestored geometric model characteristic parameter between the intelligent mobile phone support base and the engine cover of the first vehicle and a framing range corresponding to the first clamping height range;
and selecting the minimum value in the second clamping height range as a target clamping height.
4. The method according to claim 3, wherein determining the first clamping height range within which the original view range of the camera of the terminal device includes the third vehicle, and the view range corresponding to the first clamping height range, according to the second relative position, the first relative position of the third vehicle and the base of the smart phone holder, the stroke of the smart phone holder, and the photographing parameters of the camera of the terminal device, includes:
Determining a relation set between the clamping height of the smart phone support and the view finding range of the camera of the terminal equipment according to the second relative position, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment;
and determining, according to the relation set and the first relative position, the first clamping height range within which the original view finding range of the camera of the terminal equipment comprises the third vehicle, and the view finding range corresponding to the first clamping height range.
5. The method of any one of claims 1-4, wherein the obtaining the reference vehicle characteristic information of the reference vehicle provided by the user comprises:
acquiring the number of reference vehicles provided by a user;
and if the number of the reference vehicles provided by the user is 1, acquiring the vehicle brands and the vehicle outlines of the reference vehicles, and determining the vehicle brands and the vehicle outlines of the reference vehicles as the reference vehicle characteristic information of the reference vehicles.
6. The method of claim 5, wherein the method further comprises:
if the number of the reference vehicles provided by the user is determined to be more than 1, acquiring the vehicle brand of each reference vehicle in a plurality of reference vehicles;
Determining a target vehicle brand according to the vehicle brand of each reference vehicle in the plurality of reference vehicles, wherein the duty ratio of the reference vehicles belonging to the target vehicle brand in the plurality of reference vehicles is equal to or larger than a preset duty ratio;
acquiring the vehicle contour of each reference vehicle in the plurality of reference vehicles, and determining a target vehicle contour corresponding to the plurality of reference vehicles according to the vehicle contour of each reference vehicle;
and determining the reference vehicle characteristic information of the reference vehicle according to the brand of the target vehicle and the outline of the target vehicle.
7. The method of claim 6, wherein the determining the target vehicle brand from the vehicle brands of each of the plurality of reference vehicles comprises:
dividing the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brands of the reference vehicles, wherein the vehicle brands corresponding to the first reference vehicle sets in the N1 first reference vehicle sets are different, the reference vehicles contained in the first reference vehicle sets are the same as the vehicle brands corresponding to the first reference vehicle sets, and N1 is a positive integer equal to or greater than 1;
Determining N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of the reference vehicles contained in each first reference vehicle set, wherein the duty ratio of the reference vehicles contained in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than a preset duty ratio, and N2 is a positive integer equal to or greater than 1;
and determining the vehicle brand corresponding to each target reference vehicle set in the N2 target reference vehicle sets as a target vehicle brand.
8. A photographing control apparatus based on demand recognition in a driving mode, the photographing control apparatus comprising:
the acquisition unit is used for acquiring reference vehicle characteristic information of a reference vehicle provided by a user if the terminal equipment is detected to be in a charging state and the driving mode of the terminal equipment is detected to be in an opening state;
the acquisition unit is used for acquiring a current environment image of the first vehicle, which is acquired by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera;
a processing unit, configured to extract vehicle feature information of one or more second vehicles if it is determined that one or more second vehicles exist around the first vehicle according to the environmental image, and determine whether a third vehicle matching the reference vehicle feature information exists in the one or more second vehicles according to the reference vehicle feature information and the vehicle feature information of the one or more second vehicles;
The determining unit is used for determining whether the terminal equipment exists on the smart phone support or not if the third vehicle exists;
the processing unit is configured to determine a target clamping height corresponding to the smart phone support if the terminal device is determined to exist on the smart phone support, send a first instruction to the smart phone support to enable the smart phone support to be adjusted to the target clamping height, and send a second instruction to the terminal device to enable the terminal device to start a video recording function of a camera and record the video of the third vehicle until the third vehicle does not exist in a view finding range of the camera of the terminal device, wherein the first instruction includes the target clamping height, and when the clamping height of the smart phone support is the target clamping height, an original view finding range of the camera of the terminal device includes the third vehicle;
the processing unit is used for generating and outputting reminding information if the terminal equipment does not exist on the smart phone support; determining whether the terminal equipment exists on the smart phone support again; and if it is determined again that the terminal equipment does not exist on the smart phone support, maintaining the current system state, wherein the reminding information is used for reminding the user to place the terminal equipment on the smart phone support.
9. A computer readable storage medium for storing a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
10. An electronic device comprising a memory storing a computer program and a processor implementing the steps of the method of any one of claims 1 to 7 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211720089.XA CN115841763B (en) | 2022-12-30 | 2022-12-30 | Shooting control method and device based on demand identification in driving mode |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115841763A CN115841763A (en) | 2023-03-24 |
CN115841763B true CN115841763B (en) | 2023-10-27 |
Family
ID=85577652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211720089.XA Active CN115841763B (en) | 2022-12-30 | 2022-12-30 | Shooting control method and device based on demand identification in driving mode |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115841763B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001257920A (en) * | 2000-03-13 | 2001-09-21 | Fuji Photo Film Co Ltd | Camera system |
CN108024049A (en) * | 2016-10-31 | 2018-05-11 | 惠州华阳通用电子有限公司 | A kind of vehicle-mounted shooting device towards control method and device |
CN111277755A (en) * | 2020-02-12 | 2020-06-12 | 广州小鹏汽车科技有限公司 | Photographing control method and system and vehicle |
CN214929500U (en) * | 2021-04-20 | 2021-11-30 | 北京汽车集团越野车有限公司 | Automobile with a detachable front cover |
CN114760417A (en) * | 2022-04-25 | 2022-07-15 | 北京地平线信息技术有限公司 | Image shooting method and device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10531048B2 (en) * | 2016-12-15 | 2020-01-07 | Motorola Solutions, Inc. | System and method for identifying a person, object, or entity (POE) of interest outside of a moving vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN115841763A (en) | 2023-03-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |