CN116132787A - Vehicle shooting method, device, computer readable medium and electronic equipment - Google Patents
- Publication number
- CN116132787A (application CN202310140921.7A)
- Authority
- CN
- China
- Prior art keywords
- image data
- vehicle
- person
- shooting
- scenery
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
Abstract
The disclosure relates to a vehicle shooting method and apparatus, a computer-readable medium, and an electronic device. The method comprises the following steps: if the head of any person in the vehicle is detected turning toward a window while the vehicle is running, controlling a first shooting assembly arranged inside the vehicle to identify the person's line-of-sight direction; and controlling a second shooting assembly arranged outside the vehicle to collect first image data corresponding to the line-of-sight direction. Shooting assemblies inside and outside the vehicle track the scenery in a person's line of sight, record the scenery along the journey from a first-person perspective, and capture fleeting scenes; the exterior shooting assembly thus strengthens and extends the capability of the human eye and improves the user experience. Because the exterior assembly's shooting target switches with the line-of-sight direction of the person in the vehicle, the recorded scenery feels more personal. Moreover, the exterior assembly can track the line of sight of any person in the vehicle, so the captured images come from all directions, overcoming the limitation that a person seated in a fixed position misses the scenery on the other side.
Description
Technical Field
The disclosure relates to the technical field of vehicles, and in particular to a vehicle shooting method and apparatus, a computer-readable medium, and an electronic device.
Background
Self-driving tours are extremely popular, yet when a beautiful landscape is encountered on the road, road conditions usually prevent stopping at will; by the time a passenger pulls out a mobile phone to shoot, or turns to share the view with a companion, the scene has already flashed past. As a result, users cannot record all the scenery along their journey.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a vehicle photographing method, including:
if the head of any person in the vehicle is detected turning toward a window while the vehicle is running, controlling a first shooting assembly arranged inside the vehicle to identify the line-of-sight direction of the person;
and controlling a second shooting assembly arranged outside the vehicle to acquire first image data corresponding to the sight line direction.
In a second aspect, the present disclosure provides a vehicle photographing apparatus, comprising:
the first control module is configured to control a first shooting assembly arranged inside the vehicle to identify the line-of-sight direction of any person in the vehicle if the head of that person is detected turning toward a window while the vehicle is running;
and the second control module is configured to control a second shooting assembly arranged outside the vehicle to collect first image data corresponding to the line-of-sight direction.
In a third aspect, the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the vehicle photographing method provided in the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the vehicle photographing method provided in the first aspect of the present disclosure.
With this technical scheme, shooting assemblies inside and outside the vehicle track the scenery in a person's line of sight, record the scenery along the journey from a first-person perspective, and capture scenes that would otherwise flash past; the exterior shooting assembly thus strengthens and extends the capability of the human eye and improves the user experience. Because the exterior assembly's shooting target switches with the line-of-sight direction of the person in the vehicle, the recorded scenery feels more personal than ordinary video and more readily arouses a viewer's curiosity to see the world through someone else's eyes. Moreover, the exterior assembly can track the line of sight of any person in the vehicle, so the captured images come from all directions, overcoming the limitation that a person seated in a fixed position misses the scenery on the other side. In addition, as soon as the head of any person in the vehicle turns toward a window, the exterior shooting assembly is controlled to capture the scenery in that person's line of sight; the response is immediate, so fleeting scenery can be captured in time.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart illustrating a vehicle photographing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a vehicle photographing process according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment.
Fig. 6 is a block diagram of a vehicle photographing apparatus according to an exemplary embodiment.
Fig. 7 is a schematic diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality of" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the requested operation will require obtaining and using the user's personal information. The user can then autonomously decide, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that executes the operations of the technical scheme of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by means of, for example, a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control for the user to choose to "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
Meanwhile, it can be understood that the data (including but not limited to the data itself, the acquisition or the use of the data) related to the technical scheme should conform to the requirements of the corresponding laws and regulations and related regulations.
Fig. 1 is a flowchart illustrating a vehicle photographing method according to an exemplary embodiment. As shown in fig. 1, the method may include the following S101 and S102.
In S101, if it is detected that the head of any person in the vehicle turns toward a window while the vehicle is running, the first shooting assembly arranged inside the vehicle is controlled to identify the line-of-sight direction of the person.
In the present disclosure, a vehicle includes a first shooting assembly arranged inside the vehicle and a second shooting assembly arranged outside the vehicle, wherein the first shooting assembly may include one or more cameras inside the vehicle, and the second shooting assembly may include one or more cameras outside the vehicle. Illustratively, the first shooting assembly includes a rotating camera (e.g., a 360° rotating camera) arranged adjacent to the front windshield (e.g., on the rear-view mirror), and the second shooting assembly includes a rotating camera (e.g., a 360° rotating camera) arranged on the roof of the vehicle.
While the vehicle is running, the first shooting assembly can capture head images of all persons in the vehicle in real time, and image recognition technology determines whether the head of a given person has turned toward a window. If the head of any person in the vehicle is detected turning toward a window, the first shooting assembly is controlled to identify that person's line-of-sight direction.
Specifically, the line-of-sight direction of the person can be determined by the first shooting assembly identifying the head-rotation direction, or the pupil position, of the person whose head turned toward the window.
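As an illustrative sketch only (the disclosure does not specify an algorithm), the pupil-position approach could map the pupil's horizontal offset between the eye corners to a yaw angle. The function name, coordinate convention, and linear mapping below are all assumptions, not part of the disclosed method:

```python
def estimate_gaze_yaw(left_corner_x, right_corner_x, pupil_x, max_yaw_deg=45.0):
    """Map the pupil's horizontal position within the eye to a yaw angle.

    A pupil centered between the eye corners means looking straight ahead
    (0 degrees); a pupil at either corner maps to +/- max_yaw_deg. This is a
    toy geometric model, not a calibrated gaze estimator.
    """
    eye_width = right_corner_x - left_corner_x
    if eye_width <= 0:
        raise ValueError("right corner must be to the right of left corner")
    # Normalized pupil position in [-1, 1]: -1 at left corner, +1 at right.
    ratio = 2.0 * (pupil_x - left_corner_x) / eye_width - 1.0
    ratio = max(-1.0, min(1.0, ratio))
    return ratio * max_yaw_deg
```

A production system would instead drive a calibrated gaze estimator from facial landmarks and head pose; the sketch only illustrates the geometric idea.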
In S102, the second shooting assembly arranged outside the vehicle is controlled to collect first image data corresponding to the line-of-sight direction.
Specifically, the second shooting assembly may be controlled to rotate to the line-of-sight direction identified by the first shooting assembly and then collect the first image data; alternatively, a camera in the second shooting assembly whose shooting range covers the line-of-sight direction may be controlled to collect the first image data. The first image data may be still image data (e.g., a photograph) or moving image data (e.g., an animated image or a video).
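How a camera whose shooting range covers the line-of-sight direction might be selected can be sketched as follows. The tuple layout, angle convention (0° straight ahead, positive to the right), and fallback behavior are assumptions for illustration:

```python
def select_camera(cameras, gaze_yaw_deg):
    """Pick the first camera whose horizontal field of view covers the gaze yaw.

    `cameras` is a list of (name, center_deg, fov_deg) tuples describing each
    exterior camera's pointing direction and horizontal field of view. Returns
    the camera name, or None if no fixed camera covers the direction (in which
    case a rotating roof camera could be panned to `gaze_yaw_deg` instead).
    """
    for name, center_deg, fov_deg in cameras:
        # Smallest signed angular difference, wrapped into [-180, 180).
        diff = (gaze_yaw_deg - center_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            return name
    return None
```

Example: with four 120°-FOV cameras centered at 0°, 90°, 180°, and -90°, a gaze yaw of 70° falls inside the right-facing camera's field of view.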
For example, as shown in fig. 2, if during running of the vehicle it is detected that the head of the front passenger turns toward the right window, the first shooting assembly is controlled to identify the front passenger's line-of-sight direction, and the second shooting assembly arranged on the roof is then controlled to shoot first image data corresponding to that direction.
With this technical scheme, shooting assemblies inside and outside the vehicle track the scenery in a person's line of sight, record the scenery along the journey from a first-person perspective, and capture scenes that would otherwise flash past; the exterior shooting assembly thus strengthens and extends the capability of the human eye and improves the user experience. Because the exterior assembly's shooting target switches with the line-of-sight direction of the person in the vehicle, the recorded scenery feels more personal than ordinary video and more readily arouses a viewer's curiosity to see the world through someone else's eyes. Moreover, the exterior assembly can track the line of sight of any person in the vehicle, so the captured images come from all directions, overcoming the limitation that a person seated in a fixed position misses the scenery on the other side. In addition, as soon as the head of any person in the vehicle turns toward a window, the exterior shooting assembly is controlled to capture the scenery in that person's line of sight; the response is immediate, so fleeting scenery can be captured in time.
Fig. 3 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment. As shown in fig. 3, the above method may further include the following S103 and S104.
In S103, while the second shooting assembly is controlled to collect the first image data, the first shooting assembly is synchronously controlled to collect second image data of the person.
In this way, while the scenery outside the vehicle along the person's line of sight is being recorded, the expression and facial state of the person viewing that scenery at that moment can be recorded synchronously.
In S104, third image data is generated from the first image data and the second image data.
In one embodiment, the first image data and the second image data may be matched and synthesized to obtain the third image data.
When the first image data and the second image data are both still images, the two can be stitched to obtain a composite image. When both are moving images, image frames with the same capture time in the first and second image data can be stitched to obtain a composite moving image, i.e., the third image data.
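The pairing of frames by identical capture time can be sketched as below, with frames modeled as (timestamp, pixel-row) pairs purely for illustration; a real pipeline would operate on decoded video frames:

```python
def stitch_synchronized(outside_frames, inside_frames):
    """Pair frames by capture timestamp and stitch each pair side by side.

    Each frame is (timestamp, rows) where `rows` is a list of pixel rows of
    equal height across both streams. Frames without a same-timestamp partner
    are dropped. Returns the composited sequence sorted by timestamp.
    """
    inside_by_ts = {ts: rows for ts, rows in inside_frames}
    composite = []
    for ts, out_rows in sorted(outside_frames):
        in_rows = inside_by_ts.get(ts)
        if in_rows is None:
            continue  # no synchronized in-cabin frame for this instant
        # Scene on the left, the person watching it on the right.
        composite.append((ts, [o + i for o, i in zip(out_rows, in_rows)]))
    return composite
```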
In this way, the image data of the scenery outside the vehicle and the image data of the person in the vehicle viewing that scenery at the same moment can be matched and composited, avoiding missing a group photo with a beautiful scene because of time spent posing and finding a good angle.
In another embodiment, a digital person corresponding to the person may be generated first according to the second image data; and then, splicing and synthesizing the digital person corresponding to the person and the first image data to obtain third image data.
Here, a digital human (also called a meta human) is a digital character image created with digital technology that closely resembles a human likeness. Specifically, facial features, motion features, and the like of the person may be extracted from the second image data, and the digital person corresponding to the person may be generated from these features.
The digital person feature suits young users who want to appear in a photo without showing their face, and likewise avoids missing a group photo with a beautiful scene because of time spent posing and finding a good angle.
Fig. 4 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment. As shown in fig. 4, the above method may further include the following S105.
In S105, the first image data is pushed to the target terminal.
The target terminal comprises an on-board display of the vehicle and/or a terminal of at least one person in the vehicle.
The first image data is pushed to the vehicle-mounted display and/or the terminal of at least one person in the vehicle, so that the display or terminal can show it and traveling companions can view the scenery seen from another person's perspective. Pushing the first image data to the terminals of in-vehicle personnel also makes it convenient for users to save it and to share their travel updates on social platforms.
In addition, while the vehicle is running, persons in the vehicle can generate their own exclusive digital persons by means of the interior shooting assembly, further improving the user experience. Specifically, the method may further include the following steps:
in response to receiving the digital person generation request, determining a target person who issued the digital person generation request;
controlling the first shooting component to acquire fourth image data of the target person;
and generating a digital person corresponding to the target person according to the fourth image data.
The target person is any person in the vehicle. In-vehicle personnel can trigger a digital person generation request through a voice instruction or a user terminal. The target person who issued the digital person generation request may be determined based on characteristics of the voice instruction (e.g., voiceprint features or sound intensity) or the location information of the user terminal that triggered the request.
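A minimal sketch of attributing the request by sound intensity, assuming one microphone per seat (the seat names are hypothetical, and a real system would fuse this with voiceprint matching):

```python
def locate_speaker(mic_levels):
    """Attribute a voice command to the seat whose microphone heard it loudest.

    `mic_levels` maps seat name -> measured sound intensity for the command.
    The loudest microphone alone decides here; this is only an illustration
    of the sound-intensity cue mentioned above.
    """
    if not mic_levels:
        raise ValueError("no microphone readings")
    return max(mic_levels, key=mic_levels.get)
```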
In addition, during a drive, companions in the vehicle often cannot take a group photo because of the limited space and the difficulty of framing a shot inside the vehicle. The method can therefore use digital human technology and image stitching to let in-vehicle companions appear together in a photo with the scenery outside the vehicle, solving this problem. Specifically, the method may further include the following steps:
determining a target image for group photo from the first image data;
and splicing and synthesizing the digital person corresponding to at least one person in the vehicle and the target image to obtain a group photo image.
After the first image data is collected by the second shooting assembly, it can be displayed on a user terminal or the vehicle-mounted display so that a user can select a target image for the group photo. Once the target image is determined, the digital person corresponding to at least one person in the vehicle can be stitched and composited with the target image to obtain a group photo image. For example, the digital persons corresponding to all persons in the vehicle can be composited with the target image.
In one embodiment, an editing interface may be displayed on the user terminal or the vehicle-mounted display, so that a user can drag his or her digital person image onto the target image and, through the editing interface, select a favorite expression, gesture, and outfit to match the scenery.
In addition, the group photo image can be scored according to the shooting quality of the scenery, the digital persons' expressions and motions, how well the digital persons' attire matches the scenery, and so on, adding interest to the trip.
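Such a score could be a weighted combination of per-criterion ratings. The metric keys and the equal default weighting below are assumptions, not part of the disclosed method:

```python
def score_group_photo(metrics, weights=None):
    """Combine per-criterion scores (each in [0, 1]) into one 0-100 rating.

    `metrics` might hold keys like "scene_quality", "expression", "pose", and
    "attire_scene_match". With no explicit weights, all criteria count equally.
    """
    if weights is None:
        weights = {k: 1.0 for k in metrics}
    total_w = sum(weights.values())
    if total_w == 0:
        raise ValueError("weights must not sum to zero")
    score = sum(metrics[k] * weights[k] for k in metrics) / total_w
    return round(100.0 * score, 1)
```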
Fig. 5 is a flowchart illustrating a vehicle photographing method according to another exemplary embodiment. As shown in fig. 5, the above method may further include the following S106.
In S106, if there is history image data matching the first image data, the history image data is pushed.
In the present disclosure, the history image data is image data shot by the second shooting assembly during a historical trip. The second shooting assembly can store the image data shot during historical trips, so that in the current trip, when first image data is collected, it can be feature-matched against the stored images to determine whether matching history image data exists. If so, the matched history image data is pushed. In this way, when the vehicle passes a scene with characteristics similar to one seen before, the earlier travel record is recognized and pushed, evoking the user's memories.
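The feature matching against history trips can be sketched with cosine similarity over stored descriptor vectors. The plain-list vector representation and the 0.9 threshold are illustrative assumptions; the disclosure does not fix a descriptor or a similarity measure:

```python
import math

def find_matching_history(query_vec, history, threshold=0.9):
    """Return trip records whose stored feature vector resembles the query.

    `history` is a list of (trip_id, feature_vector) pairs; vectors are plain
    float lists standing in for whatever descriptor the system would extract.
    Cosine similarity at or above `threshold` counts as "a scene with similar
    characteristics".
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    return [trip_id for trip_id, vec in history
            if cosine(query_vec, vec) >= threshold]
```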
Fig. 6 is a block diagram of a vehicle photographing apparatus according to an exemplary embodiment. As shown in fig. 6, the apparatus 200 may include:
a first control module 201, configured to control a first shooting assembly arranged inside the vehicle to identify the line-of-sight direction of any person in the vehicle if the head of that person is detected turning toward a window while the vehicle is running;
and a second control module 202 for controlling a second photographing assembly disposed outside the vehicle to collect first image data corresponding to the line of sight direction.
With this technical scheme, shooting assemblies inside and outside the vehicle track the scenery in a person's line of sight, record the scenery along the journey from a first-person perspective, and capture scenes that would otherwise flash past; the exterior shooting assembly thus strengthens and extends the capability of the human eye and improves the user experience. Because the exterior assembly's shooting target switches with the line-of-sight direction of the person in the vehicle, the recorded scenery feels more personal than ordinary video and more readily arouses a viewer's curiosity to see the world through someone else's eyes. Moreover, the exterior assembly can track the line of sight of any person in the vehicle, so the captured images come from all directions, overcoming the limitation that a person seated in a fixed position misses the scenery on the other side. In addition, as soon as the head of any person in the vehicle turns toward a window, the exterior shooting assembly is controlled to capture the scenery in that person's line of sight; the response is immediate, so fleeting scenery can be captured in time.
Optionally, the first control module 201 is further configured to synchronously control the first shooting assembly to collect second image data of the person while the second control module 202 controls the second shooting assembly to collect the first image data;
the apparatus 200 further comprises:
the first generation module is used for generating third image data according to the first image data and the second image data.
Optionally, the first generating module is configured to match and synthesize the first image data and the second image data to obtain third image data.
Optionally, the first generating module includes:
the generation sub-module is used for generating digital persons corresponding to the persons according to the second image data;
and the splicing and synthesizing sub-module is used for splicing and synthesizing the digital person corresponding to the person and the first image data to obtain third image data.
Optionally, the apparatus 200 further includes:
and the first pushing module is used for pushing the first image data to a target terminal, wherein the target terminal comprises a vehicle-mounted display of the vehicle and/or a terminal of at least one person in the vehicle.
Optionally, the apparatus 200 further includes:
the first determining module is used for determining a target person sending out the digital person generation request in response to receiving the digital person generation request, wherein the target person is any person in the vehicle;
the first control module 201 is further configured to control the first shooting assembly to collect fourth image data of the target person;
the apparatus 200 further comprises:
and the second generation module is used for generating the digital person corresponding to the target person according to the fourth image data.
Optionally, the apparatus 200 further includes:
a second determining module, configured to determine a target image for group photo from the first image data;
and the splicing and synthesizing module is configured to splice and synthesize the digital person corresponding to at least one person in the vehicle with the target image to obtain a group photo image.
Optionally, the apparatus 200 further includes:
and the second pushing module is used for pushing the historical image data if the historical image data matched with the first image data exists, wherein the historical image data is the image data shot by the second shooting assembly in a historical journey.
The present disclosure also provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the above-described vehicle photographing method provided by the present disclosure.
Referring now to fig. 7, a schematic diagram of a configuration of an electronic device (e.g., an in-vehicle terminal) 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the in-vehicle terminal may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: if the head of any person in the vehicle is detected to turn out of the window during the running process of the vehicle, a first shooting component arranged in the vehicle is controlled to identify the sight direction of the person; and controlling a second shooting assembly arranged outside the vehicle to acquire first image data corresponding to the sight line direction.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not limit the module itself; for example, the second control module may also be described as "a module that controls a second shooting component provided outside the vehicle to collect first image data corresponding to the line of sight direction".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example 1 provides a vehicle photographing method, including: if the head of any person in the vehicle is detected to turn out of the window during the running process of the vehicle, a first shooting component arranged in the vehicle is controlled to identify the sight direction of the person; and controlling a second shooting assembly arranged outside the vehicle to acquire first image data corresponding to the sight line direction.
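The control flow of Example 1 can be sketched in code. This is only an illustrative model, not the patent's implementation: the `35°` head-yaw threshold, the `GazeEstimate` representation, and the pan/tilt mapping and limits are all assumptions introduced for the sketch.

```python
from dataclasses import dataclass


@dataclass
class GazeEstimate:
    """Gaze direction of a person, as recognized by the in-cabin (first) shooting component."""
    yaw_deg: float    # horizontal gaze angle relative to the vehicle's forward axis
    pitch_deg: float  # vertical gaze angle


def head_turned_to_window(face_yaw_deg: float, threshold_deg: float = 35.0) -> bool:
    """Treat a head yaw beyond an assumed threshold as 'turned toward the window'."""
    return abs(face_yaw_deg) > threshold_deg


def aim_exterior_camera(gaze: GazeEstimate, pan_limits=(-90.0, 90.0)) -> dict:
    """Map the in-cabin gaze direction onto pan/tilt commands for the exterior
    (second) shooting component, clamped to its assumed mechanical limits."""
    pan = max(pan_limits[0], min(pan_limits[1], gaze.yaw_deg))
    tilt = max(-30.0, min(30.0, gaze.pitch_deg))
    return {"pan_deg": pan, "tilt_deg": tilt}
```

In this sketch, the first shooting component would supply `GazeEstimate` values whenever `head_turned_to_window` fires, and the resulting pan/tilt command steers the exterior camera to collect the first image data.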
In accordance with one or more embodiments of the present disclosure, example 2 provides the method of example 1, the method further comprising: synchronously controlling the first shooting component to acquire second image data of the person when controlling the second shooting component to acquire the first image data; third image data is generated from the first image data and the second image data.
According to one or more embodiments of the present disclosure, example 3 provides the method of example 2, the generating third image data from the first image data and the second image data, comprising: and matching and synthesizing the first image data and the second image data to obtain third image data.
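The "matching and synthesizing" step of Example 3 can be illustrated with a minimal paste-style composite. This is a simplified stand-in for whatever synthesis the patent intends; images are modeled as 2D lists of pixel values, and the offset-based paste is an assumption for the sketch.

```python
def composite(scenery, person, top, left):
    """Paste the person crop (second image data) onto the scenery frame
    (first image data) at an offset, producing third image data.
    Pixels falling outside the scenery bounds are discarded."""
    out = [row[:] for row in scenery]  # copy so the scenery frame is untouched
    for r, row in enumerate(person):
        for c, px in enumerate(row):
            if 0 <= top + r < len(out) and 0 <= left + c < len(out[0]):
                out[top + r][left + c] = px
    return out
```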
According to one or more embodiments of the present disclosure, example 4 provides the method of example 2, the generating third image data from the first image data and the second image data, comprising: generating a digital person corresponding to the person according to the second image data; and splicing and synthesizing the digital person corresponding to the person and the first image data to obtain third image data.
Example 5 provides the method of example 1, according to one or more embodiments of the present disclosure, the method further comprising: and pushing the first image data to a target terminal, wherein the target terminal comprises an on-board display of the vehicle and/or a terminal of at least one person in the vehicle.
Example 6 provides the method of example 1, according to one or more embodiments of the present disclosure, the method further comprising: in response to receiving a digital person generation request, determining a target person who sends the digital person generation request, wherein the target person is any person in a vehicle; controlling the first shooting component to acquire fourth image data of the target person; and generating the digital person corresponding to the target person according to the fourth image data.
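The request-handling flow of Example 6 can be sketched as below. The `first_camera` and `generator` interfaces, and the `person_id` request field, are hypothetical names standing in for the patent's first shooting component and digital-person model; the patent does not specify these interfaces.

```python
def handle_digital_person_request(request, first_camera, generator):
    """On receiving a digital-person generation request, determine the target
    person (any person in the vehicle), capture fourth image data of them with
    the in-cabin (first) shooting component, and generate their digital person."""
    target = request["person_id"]
    fourth_image_data = first_camera.capture(target)
    return generator.build(target, fourth_image_data)
```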
Example 7 provides the method of example 6, according to one or more embodiments of the present disclosure, the method further comprising: determining a target image for group photo from the first image data; and splicing and synthesizing at least one digital person corresponding to the person in the vehicle and the target image to obtain a group photo image.
According to one or more embodiments of the present disclosure, example 8 provides the method of any one of examples 1-7, the method further comprising: and pushing the historical image data if the historical image data matched with the first image data exists, wherein the historical image data is the image data shot by the second shooting component in a historical journey.
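One plausible way to decide whether historical image data "matches" the current first image data, as in Example 8, is a coarse similarity test; the patent does not name a technique, so the grey-level histogram intersection below is purely an assumed example.

```python
from collections import Counter


def grey_histogram(image, bins=8):
    """Normalized coarse grey-level histogram of a 2D image (pixel values 0-255)."""
    counts = Counter(min(px * bins // 256, bins - 1) for row in image for px in row)
    total = sum(counts.values())
    return [counts.get(b, 0) / total for b in range(bins)]


def matches_history(current, historical, threshold=0.8):
    """Histogram-intersection similarity: the historical frame would be pushed
    when its overlap with the current frame exceeds the assumed threshold."""
    h1, h2 = grey_histogram(current), grey_histogram(historical)
    similarity = sum(min(a, b) for a, b in zip(h1, h2))
    return similarity >= threshold
```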
According to one or more embodiments of the present disclosure, example 9 provides a vehicle photographing apparatus, including: the first control module is used for controlling a first shooting component arranged in the vehicle to recognize the sight line direction of any person in the vehicle if the head of the person is detected to turn out of the window in the running process of the vehicle; and the second control module is used for controlling a second shooting assembly arranged outside the vehicle to acquire the first image data corresponding to the sight line direction.
According to one or more embodiments of the present disclosure, example 10 provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of any of examples 1-8.
Example 11 provides an electronic device according to one or more embodiments of the present disclosure, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the method of any one of examples 1-8.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be repeated here.
Claims (11)
1. A vehicle photographing method, characterized by comprising:
if the head of any person in the vehicle is detected to turn out of the window during the running process of the vehicle, a first shooting component arranged in the vehicle is controlled to identify the sight direction of the person;
and controlling a second shooting assembly arranged outside the vehicle to acquire first image data corresponding to the sight line direction.
2. The method according to claim 1, wherein the method further comprises:
synchronously controlling the first shooting component to acquire second image data of the person when controlling the second shooting component to acquire the first image data;
third image data is generated from the first image data and the second image data.
3. The method of claim 2, wherein generating third image data from the first image data and the second image data comprises:
and matching and synthesizing the first image data and the second image data to obtain third image data.
4. The method of claim 2, wherein generating third image data from the first image data and the second image data comprises:
generating a digital person corresponding to the person according to the second image data;
and splicing and synthesizing the digital person corresponding to the person and the first image data to obtain third image data.
5. The method according to claim 1, wherein the method further comprises:
and pushing the first image data to a target terminal, wherein the target terminal comprises an on-board display of the vehicle and/or a terminal of at least one person in the vehicle.
6. The method according to claim 1, wherein the method further comprises:
in response to receiving a digital person generation request, determining a target person who sends the digital person generation request, wherein the target person is any person in a vehicle;
controlling the first shooting component to acquire fourth image data of the target person;
and generating the digital person corresponding to the target person according to the fourth image data.
7. The method of claim 6, wherein the method further comprises:
determining a target image for group photo from the first image data;
and splicing and synthesizing at least one digital person corresponding to the person in the vehicle and the target image to obtain a group photo image.
8. The method according to any one of claims 1-7, further comprising:
and pushing the historical image data if the historical image data matched with the first image data exists, wherein the historical image data is the image data shot by the second shooting component in a historical journey.
9. A vehicle photographing apparatus, comprising:
the first control module is used for controlling a first shooting component arranged in the vehicle to recognize the sight line direction of any person in the vehicle if the head of the person is detected to turn out of the window in the running process of the vehicle;
and the second control module is used for controlling a second shooting assembly arranged outside the vehicle to acquire the first image data corresponding to the sight line direction.
10. A computer readable medium on which a computer program is stored, characterized in that the program, when being executed by a processing device, carries out the steps of the method according to any one of claims 1-8.
11. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method according to any one of claims 1-8.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310140921.7A | 2023-02-15 | 2023-02-15 | Vehicle shooting method, device, computer readable medium and electronic equipment |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310140921.7A | 2023-02-15 | 2023-02-15 | Vehicle shooting method, device, computer readable medium and electronic equipment |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116132787A | 2023-05-16 |

Family

ID=86300966

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310140921.7A | Vehicle shooting method, device, computer readable medium and electronic equipment | 2023-02-15 | 2023-02-15 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116132787A (en) |

2023-02-15: application CN202310140921.7A filed in China; published as CN116132787A, status Pending.
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |