CN114581627B - ARHUD-based imaging method and system - Google Patents
- Publication number: CN114581627B
- Application number: CN202210215102.XA
- Authority
- CN
- China
- Prior art keywords
- image
- arhud
- face
- facial
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/037—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
Abstract
The invention provides an ARHUD-based imaging method and system for presenting a processed, altered image on an augmented-reality head-up display (ARHUD) on the front windshield of an automobile. The imaging method comprises the following steps: continuously capturing face images of the driver in the cabin with an in-vehicle camera, and extracting a plurality of facial key points from the face images; triangulating the facial key points, and affine-transforming at least a portion of them to target key points to obtain an altered image; and projecting the altered image on the ARHUD. The ARHUD-based imaging method and system enable face-image retouching in automotive scenarios, raise the level of cabin intelligence, and improve the user's driving experience.
Description
Technical Field
The invention relates generally to the field of intelligent cabins, and in particular to an ARHUD-based imaging method and system.
Background
With the development of intelligent automobiles, users' demands are no longer limited to vehicle performance: they increasingly expect intelligent functions. A car is no longer judged by driving performance alone; when people drive, they want more intelligent assistance.
Retouching photos or videos is commonplace in fields such as multimedia and video communication, but for the cabin environment no scheme currently addresses face-image processing. In some special scenarios, such as in-car video conferences, the demand for automotive intelligence therefore remains unmet.
Disclosure of Invention
The invention aims to solve the technical problem of providing an ARHUD-based imaging method and system that can retouch face images in automotive scenarios, raise the level of cabin intelligence, and improve the user's in-car experience.
To solve this technical problem, the invention provides an ARHUD-based imaging method adapted to present a processed, altered image on an augmented-reality head-up display (ARHUD) on the front windshield of an automobile, comprising the following steps:
continuously capturing face images of the driver in the cabin with an in-vehicle camera, and extracting a plurality of facial key points from the face images;
triangulating the plurality of facial key points, and affine-transforming at least a portion of them to target key points to obtain an altered image; and
projecting the altered image on the ARHUD.
In an embodiment of the present invention, the method further comprises, before the triangulation, generating a plurality of uniformly distributed grids over the face image and affine-transforming at least a portion of the facial key points to target key points within the grids to obtain the altered image.
In an embodiment of the present invention, the method further comprises smoothing the vertices of the grids by Kalman filtering or One Euro filtering.
In an embodiment of the present invention, the method further comprises:
continuously capturing images of the scene ahead of the vehicle with an exterior camera;
when a pedestrian is detected in a front image, performing key-point detection on that image to obtain the image coordinates of the pedestrian's eyebrow position;
obtaining the world coordinates of the pedestrian's eyebrow position from the intrinsic and extrinsic parameters of the exterior camera and the pedestrian's depth relative to it;
obtaining the pedestrian's gaze direction, and obtaining the world coordinates corresponding to each pixel of the driver's face image from the gaze direction, the intrinsic and extrinsic parameters of the in-vehicle camera, and the driver's depth relative to it; and
presenting the altered image, horizontally mirrored, at the corresponding position on the outside of the front windshield, according to the world coordinates of the pedestrian's eyebrow position and the world coordinates of each pixel of the driver's face image.
In an embodiment of the invention, the method further comprises matting out the face region of the driver's face image to obtain a face-region image, and presenting it, horizontally mirrored, at the corresponding position outside the front windshield.
In an embodiment of the present invention, after the altered image is presented at the corresponding position outside the front windshield through horizontal mirroring, the method further comprises increasing the display brightness of the ARHUD.
In an embodiment of the present invention, the method further comprises performing distortion correction on the ARHUD in advance.
In an embodiment of the present invention, at least a portion of the plurality of facial key points is affine-transformed to the target key points in response to any one of: a slider key on the central control screen, a voice input, a mobile-app instruction, or a slider control on the ARHUD.
In one embodiment of the invention, the slider control on the ARHUD comprises: displaying an image of the slider on the ARHUD; acquiring the world coordinates of each pixel of the slider; identifying one or more finger joints with the in-vehicle camera; acquiring the world coordinates of the finger joints and connecting them into a straight line; and, if the straight line lies within a preset region of the slider and the finger performs a specified action, triggering the slider to move with the finger, thereby affine-transforming at least a portion of the facial key points to the target key points.
In an embodiment of the present invention, the plurality of facial key points are two-dimensional, and the method further comprises:
after triangulating the facial key points, converting the two-dimensional key points into three-dimensional key points by a three-dimensional face-reconstruction method;
affine-transforming at least a portion of the three-dimensional facial key points to target key points;
converting the affine-transformed three-dimensional key points back into two-dimensional key points to obtain the altered image; and
projecting the altered image on the ARHUD and/or on a side window of the vehicle.
To solve the same technical problem, the invention further provides an ARHUD-based in-car imaging system comprising:
a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the ARHUD-based imaging method described above.
The invention also provides a computer-readable medium storing computer program code which, when executed by a processor, implements the ARHUD-based imaging method described above.
Compared with the prior art, the invention offers the following advantages: the ARHUD-based imaging method and system retouch face images in automotive application scenarios and, according to different needs, present a face image with special effects outside the vehicle or through the in-car multimedia, improving the intelligence of driving and vehicle use. In particular, a grid-calibration algorithm can replace the triangulation algorithm, so that face key-point acquisition remains stable even when the running automobile shakes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the principles of the invention. In the accompanying drawings:
FIG. 1 is a flow chart of an ARHUD-based imaging method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an ARHUD-based imaging method according to an embodiment of the invention; and
fig. 3 is a system block diagram of an ARHUD-based imaging system in accordance with an embodiment of the invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application may be applied to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
As used in this application and in the claims, the singular forms "a," "an," and "the" may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In the description of the present application, it should be understood that, where azimuth terms such as "front, rear, upper, lower, left, right", "transverse, vertical, horizontal", and "top, bottom", etc., indicate azimuth or positional relationships generally based on those shown in the drawings, only for convenience of description and simplification of the description, these azimuth terms do not indicate and imply that the apparatus or elements referred to must have a specific azimuth or be constructed and operated in a specific azimuth, and thus should not be construed as limiting the scope of protection of the present application; the orientation word "inner and outer" refers to inner and outer relative to the contour of the respective component itself.
Spatially relative terms, such as "above," "over," "on the upper surface of," and the like, may be used herein for ease of description to describe the spatial position of one device or feature relative to another as illustrated in the figures. It will be understood that these terms are intended to encompass orientations in use or operation other than the one depicted. For example, if a device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" them; the exemplary term "above" thus encompasses both "above" and "below." The device may also be oriented in other ways (rotated 90 degrees or otherwise), and the spatially relative descriptors used herein are to be interpreted accordingly.
In addition, the terms "first", "second", etc. are used to define the components, and are merely for convenience of distinguishing the corresponding components, and unless otherwise stated, the terms have no special meaning, and thus should not be construed as limiting the scope of the present application. Furthermore, although terms used in the present application are selected from publicly known and commonly used terms, some terms mentioned in the specification of the present application may be selected by the applicant at his or her discretion, the detailed meanings of which are described in relevant parts of the description herein. Furthermore, it is required that the present application be understood, not simply by the actual terms used but by the meaning of each term lying within.
It will be understood that when an element is referred to as being "on," "connected to," "coupled to," or "contacting" another element, it can be directly on, connected or coupled to, or contacting the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to," "directly coupled to," or "directly contacting" another element, there are no intervening elements present. Likewise, when a first element is referred to as being "electrically contacted" or "electrically coupled" to a second element, there are electrical paths between the first element and the second element that allow current to flow. The electrical path may include a capacitor, a coupled inductor, and/or other components that allow current to flow even without direct contact between conductive components.
Referring to fig. 1, an embodiment of the present invention proposes an ARHUD-based imaging method 10 (hereinafter the "imaging method 10"), adapted to present a processed, altered image on an augmented-reality head-up display (ARHUD) on the front windshield of an automobile. Flowcharts are used in this application to describe the operations performed by systems according to its embodiments. It should be understood that these operations are not necessarily performed exactly in the order shown; the steps may instead be processed in reverse order or simultaneously, and other operations may be added to, or removed from, these processes.
According to fig. 1, the imaging method 10 includes the following steps.
Step 11 is to continuously capture face images of the driver in the cabin with an in-vehicle camera, and to extract a plurality of facial key points from the face images. For example, key points may be chosen along the facial contour, the eyebrows, and the mouth and nose, to prepare for the retouching in later steps.
Further, step 12 is to triangulate the plurality of facial key points, and to affine-transform at least a portion of them to target key points to obtain the altered image. The triangulation may be a conventional mathematical triangulation, which splits the face image into triangle patches so that the feature points (key points) of selected face parts can be treated specially. In this step, the affine transformation moves some of the facial key points from their original, real positions to target positions, achieving retouching effects such as the common face-slimming adjustment.
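The per-triangle affine warp of step 12 can be sketched with barycentric coordinates: a point is expressed relative to the source triangle's vertices, then recombined with the target triangle's vertices. This is an illustrative sketch, not the patent's implementation; the function names and example triangles are assumptions.

```python
def barycentric(p, a, b, c):
    """Barycentric weights (w_a, w_b, w_c) of point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w_a = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w_b = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w_a, w_b, 1.0 - w_a - w_b

def warp_point(p, src_tri, dst_tri):
    """Affine-map p from the source triangle onto the target triangle."""
    w_a, w_b, w_c = barycentric(p, *src_tri)
    x = w_a * dst_tri[0][0] + w_b * dst_tri[1][0] + w_c * dst_tri[2][0]
    y = w_a * dst_tri[0][1] + w_b * dst_tri[1][1] + w_c * dst_tri[2][1]
    return (x, y)
```

Applying this to every triangle of the triangulation, with source vertices at the detected key points and target vertices at the moved key points, yields the retouched image geometry.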
Finally, step 13 is to project the altered image on the ARHUD. In a typical embodiment of the invention, the ARHUD is an augmented-reality head-up display positioned in front of the driver's cabin, on the front windshield, where the altered image is displayed. The invention is not limited to this: in other embodiments the ARHUD may be located on another pane of glass as the application requires, and the invention is not restricted to the conventional ARHUD position.
Preferably, to overcome unstable key-point detection caused by shake while the automobile is moving, some embodiments of the present invention further comprise, before the triangulation of step 12 in fig. 1, generating a plurality of uniformly distributed grids over the face image and affine-transforming at least a portion of the facial key points to target key points within those grids to obtain the altered image. Further preferably, such embodiments also smooth the grid vertices by Kalman filtering or One Euro filtering, so that the altered image is rendered more cleanly. Processing on a grid has two advantages. First, a grid vertex very likely falls inside one of the triangles produced by triangulation, so its value is interpolated from three key points and is therefore relatively stable. Second, in the original scheme the facial key points lie on the edge of the face, which easily produces a jagged contour when the image resolution is low or the key points are few; the grid further reduces this jaggedness, making the method better suited to the automotive scenario.
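A minimal One Euro filter for a single vertex coordinate might look as follows. This is a sketch with assumed parameter values (frame rate, cutoffs); the Kalman alternative mentioned above would serve the same smoothing role.

```python
import math

class OneEuroFilter:
    """One Euro filter for one scalar coordinate of a grid vertex (parameters assumed)."""

    def __init__(self, freq=30.0, min_cutoff=1.0, beta=0.02, d_cutoff=1.0):
        self.freq = freq            # camera frame rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta            # higher beta -> less lag on fast motion
        self.d_cutoff = d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, freq):
        # exponential-smoothing coefficient for a given cutoff frequency
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # adapt the cutoff: smooth hard when slow, follow quickly when fast
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

In use, one filter instance per vertex coordinate is fed the raw vertex position each frame; the filtered positions drive the mesh warp.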
Generally, the imaging method 10 described above suits a user applying this function inside the cabin; for example, in scenarios such as an in-car video conference, the user's face image can be retouched in the cabin to finally present a face-slimming effect.
Building on the above, the scheme of the invention can be further extended in some preferred embodiments, in which the ARHUD-based imaging method additionally comprises the following steps:
continuously capturing images of the scene ahead of the vehicle with an exterior camera;
when a pedestrian is detected in a front image, performing key-point detection on that image to obtain the image coordinates of the pedestrian's eyebrow position;
obtaining the world coordinates of the pedestrian's eyebrow position from the intrinsic and extrinsic parameters of the exterior camera and the pedestrian's depth relative to it;
obtaining the pedestrian's gaze direction, and obtaining the world coordinates corresponding to each pixel of the driver's face image from the gaze direction, the intrinsic and extrinsic parameters of the in-vehicle camera, and the driver's depth relative to it; and
presenting the altered image, horizontally mirrored, at the corresponding position on the outside of the front windshield, according to the world coordinates of the pedestrian's eyebrow position and the world coordinates of each pixel of the driver's face image.
In this way, a pedestrian outside the automobile can see the retouched face image of the user inside. In such an embodiment, to present the retouched result well, the scheme is best suited to the case of a single pedestrian.
Further, in such an embodiment, to enhance how the cabin user's face image is presented outside the vehicle, the method also mattes out the face region of the driver's face image to obtain a face-region image, and presents it, horizontally mirrored, at the corresponding position outside the front windshield. In addition, after the altered image is presented there, the display brightness of the ARHUD is increased so that the user's image shows up better outside the vehicle.
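The horizontal mirroring is needed because the pedestrian views the windshield display from the opposite side; on a pixel grid it simply reverses each row (x' = W − 1 − x). A minimal sketch:

```python
def mirror_horizontal(img):
    """Horizontally mirror an image given as a list of pixel rows (x' = W-1-x)."""
    return [list(reversed(row)) for row in img]
```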
Preferably, in such an embodiment, to counter the distortion that the curved front windshield introduces into the image, the method also performs distortion correction on the ARHUD in advance, before applying the scheme of the invention. The correction can follow conventional practice in the art, for example projecting dot-shaped feature points onto the windshield through the ARHUD and calibrating against them; since the specific correction procedure is not the focus of the invention, it is not detailed here.
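One conventional ingredient of such a pre-correction is inverting a radial distortion model fitted from the projected calibration dots. The following sketch uses fixed-point iteration on normalized coordinates; the model and parameter values are assumptions for illustration, not the patent's procedure.

```python
def distort_point(xu, yu, k1, k2=0.0):
    """Apply the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4) in normalized coords."""
    r2 = xu * xu + yu * yu
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * factor, yu * factor

def undistort_point(xd, yd, k1, k2=0.0, iters=10):
    """Invert the radial model by fixed-point iteration (converges for mild distortion)."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu
```

Pre-warping the ARHUD output with the inverse model makes the projected image appear undistorted on the curved windshield.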
Further, the invention proposes, by way of example, several feasible means of controlling the image processing described above. In some embodiments of the invention, at least a portion of the facial key points is affine-transformed to the target key points in response to any one of: a slider key on the central control screen, a voice input, a mobile-app instruction, or a slider control on the ARHUD. With a slider key configured on the central control screen, the user exercises control by touching the screen. Voice input and mobile-app control require additional signal transmission and input configuration: voice input needs an on-board voice capture and processing module that exchanges signals with the module adjusting the facial key-point positions, while app control goes through an application on the mobile terminal. All of these are common intelligent-control means in existing automobiles and can be applied directly to realize the ARHUD-based imaging method of the invention.
For the slider control on the ARHUD in particular, the operation is as follows: display an image of the slider on the ARHUD and acquire the world coordinates of each of its pixels; identify one or more finger joints with the in-vehicle camera, acquire their world coordinates, and connect the joints into a straight line; if the line lies within a preset region of the slider and the finger performs a specified action, the slider is triggered to follow the finger, thereby adjusting the target positions to which at least a portion of the facial key points are to be moved.
Specifically, referring to fig. 2, the principle of the ARHUD slider control is briefly described for one embodiment of the invention. As shown in fig. 2, after the region 21 in which the slider can move is obtained (elliptical in this embodiment, though the invention is not limited to that shape), the in-vehicle camera identifies the two finger joints 221 and 222 and their respective coordinates, and the two joints are connected into a straight line 20. In practical application, once the line 20 is determined to lie within the movable region 21 of the slider, the target positions of at least a portion of the facial key points can be moved according to the specified finger motion, completing the corresponding retouching action. Compared with alternative control schemes applicable to the invention, such as one on the central control screen, this method lets the driver finish retouching on the ARHUD directly ahead, without shifting gaze to the central control screen.
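The finger-joint line test can be sketched as follows: the two joints define a line in the display plane, and the slider is considered targeted when that line passes close enough to the slider region. The circular region here is a simplification of the elliptical region 21; the names and thresholds are illustrative assumptions.

```python
import math

def joints_to_line(j1, j2):
    """Line through two finger joints as coefficients (a, b, c) of a*x + b*y + c = 0."""
    a = j2[1] - j1[1]
    b = j1[0] - j2[0]
    c = -(a * j1[0] + b * j1[1])
    return a, b, c

def line_hits_region(line, center, radius):
    """True if the finger line passes within `radius` of the slider-region center."""
    a, b, c = line
    dist = abs(a * center[0] + b * center[1] + c) / math.hypot(a, b)
    return dist <= radius
```

A full implementation would additionally check the specified trigger gesture before letting the slider follow the finger, as the embodiment describes.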
An embodiment of the present invention also proposes an ARHUD-based in-car imaging system 30 as shown in fig. 3. According to fig. 3, the ARHUD-based in-car imaging system 30 may include an internal communication bus 31, a Processor (Processor) 32, a Read Only Memory (ROM) 33, a Random Access Memory (RAM) 34, and a communication port 35. The ARHUD-based in-car imaging system 30 may also include a hard disk 36 when applied to a personal computer.
The internal communication bus 31 may enable data communication between the components of the ARHUD-based in-car imaging system 30. The processor 32 may perform determinations and issue prompts. In some embodiments, the processor 32 may be composed of one or more processors. The communication port 35 may enable data communication between the ARHUD-based in-car imaging system 30 and the outside. In some embodiments, the ARHUD-based in-car imaging system 30 may send and receive information and data over a network through the communication port 35.
The ARHUD-based in-car imaging system 30 may also include various forms of program storage units and data storage units, such as the hard disk 36, Read-Only Memory (ROM) 33, and Random Access Memory (RAM) 34, capable of storing various data files used for computer processing and/or communication, as well as program instructions executed by the processor 32. The processor 32 executes these instructions to implement the main part of the method. The results processed by the processor 32 are transmitted to user equipment through the communication port 35 and displayed on the user interface.
In addition, another aspect of the invention proposes a computer-readable medium storing computer program code which, when executed by a processor, implements the above-mentioned ARHUD-based in-car imaging method.
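The coordinate recovery used throughout the method — obtaining world coordinates for an image pixel from a camera's internal parameters, external parameters, and depth information — follows standard pinhole back-projection. A minimal sketch; the function name and the world-to-camera convention for R and t are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at the given camera-frame depth to world
    coordinates. K is the 3x3 intrinsic matrix; R, t are the world-to-camera
    extrinsics (X_cam = R @ X_world + t)."""
    # Ray through the pixel in normalized camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    X_cam = depth * ray                 # 3D point in the camera frame
    return R.T @ (X_cam - t)            # transform back into the world frame

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
# The principal point at depth 2 m lies on the optical axis: (0, 0, 2).
print(pixel_to_world(320, 240, 2.0, K, np.eye(3), np.zeros(3)))
```

The same computation applies both to the pedestrian's eyebrow position (exterior camera) and to each pixel of the driver's facial image (in-vehicle camera), with the respective calibration data substituted.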
While the basic concepts have been described above, it will be apparent to those skilled in the art that the above disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the present application may occur to those skilled in the art. Such modifications, improvements, and adaptations are intended to be suggested by this application, and are therefore within the spirit and scope of the exemplary embodiments of this application.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the present application. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Some aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, tape …), optical disks (e.g., compact disc (CD), digital versatile disc (DVD) …), smart cards, and flash memory devices (e.g., card, stick, key drive …).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer readable medium can be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer readable medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, radio frequency signals, or the like, or a combination of any of the foregoing.
Likewise, it should be noted that, in order to simplify the presentation disclosed herein and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of the embodiments are qualified in some examples by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding methods. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of the present application are approximations, in particular embodiments such numerical values are set as precisely as practicable.
While the present application has been described with reference to the present specific embodiments, those of ordinary skill in the art will recognize that the above embodiments are for illustrative purposes only, and that various equivalent changes or substitutions can be made without departing from the spirit of the present application, and therefore, all changes and modifications to the embodiments described above are intended to be within the scope of the claims of the present application.
Claims (9)
1. An ARHUD-based imaging method, suitable for presenting a processed, altered image with an augmented reality effect on a head-up display screen (ARHUD) on an automobile front windshield, comprising the steps of:
continuously acquiring face images of a driver in a cabin through an in-vehicle camera, and acquiring a plurality of face key points in the face images;
performing triangulation processing on the plurality of facial key points, and affine transforming at least a portion of the plurality of facial key points to target key points to obtain an altered image; and
projecting the altered image on the ARHUD;
the method further comprises the steps of: continuously acquiring a front image of the vehicle through an external camera;
when detecting that a pedestrian appears in the front image, detecting key points of the front image with the pedestrian picture so as to obtain image coordinates of the eyebrow position of the pedestrian;
acquiring world coordinates of the pedestrian's eyebrow position according to the internal and external parameters of the external camera and the depth information of the pedestrian relative to the external camera;
acquiring the sight line direction of the pedestrian, and acquiring the world coordinates corresponding to each pixel point in the facial image of the driver according to the sight line direction, the internal and external parameters of the in-vehicle camera, and the depth information of the driver relative to the in-vehicle camera; and
according to the world coordinates of the pedestrian's eyebrow position and the world coordinates corresponding to each pixel point in the driver's facial image, displaying the altered image at a corresponding position outside the front windshield of the automobile through horizontal mirroring;
the method further comprises the steps of: affine transforming at least a portion of the plurality of facial key points to the target key points in response to any one of a slide rod key on a central control screen, a voice input, a mobile-end application instruction, or a slide rod control on the ARHUD;
wherein the control of the slide rod on the ARHUD comprises: displaying an image of the slide rod on the ARHUD, acquiring the world coordinates of each pixel point of the slide rod, identifying one or more finger joint points through the in-vehicle camera, acquiring the world coordinates of the finger joint points, and connecting the finger joint points into a straight line,
and if the straight line lies within a preset area of the slide rod and the finger performs a specified action, triggering the slide rod to move with the finger, so as to affine transform at least a portion of the plurality of facial key points to the target key points.
2. The method of claim 1, further comprising generating a plurality of uniformly distributed meshes in the face image prior to performing the triangulation processing, and affine transforming at least a portion of the plurality of facial key points to target key points within the plurality of meshes to obtain the altered image.
3. The method of claim 2, further comprising smoothing the vertices of the plurality of meshes by Kalman filtering or one-euro filtering.
4. The method of claim 1, further comprising obtaining a facial partial image by matting out the face portion from the facial image of the driver, and presenting the facial partial image at a corresponding position outside the front windshield of the vehicle through horizontal mirroring.
5. The method according to claim 1 or 4, further comprising the step of increasing the display brightness of the ARHUD after the altered image is presented at a corresponding position outside the front windshield of the automobile by a horizontal mirroring process.
6. The method of claim 1, further comprising the step of pre-distortion correcting said ARHUD.
7. The method of claim 1, wherein the plurality of facial key points are two-dimensional facial key points, the method further comprising:
after performing the triangulation processing on the plurality of facial key points, converting the two-dimensional facial key points into three-dimensional facial key points through a three-dimensional face reconstruction method;
affine transforming at least a portion of the plurality of three-dimensional facial key points to target key points;
converting the affine-transformed three-dimensional facial key points back to two-dimensional facial key points to obtain the altered image; and
the altered image is projected on the ARHUD and/or on a side windshield of the vehicle.
8. An ARHUD-based in-car imaging system, comprising:
a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method of any one of claims 1-7.
9. A computer readable medium storing computer program code which, when executed by a processor, implements the method of any of claims 1-7.
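The core warping step of claim 1 — triangulating the facial key points and affine transforming each triangle toward its target key points — reduces, per triangle, to solving a small linear system. A minimal numpy sketch under the assumption of non-degenerate triangles; the function names are the editor's, and a full pipeline would additionally resample pixels (e.g., with an image-warping library) rather than only moving points:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve for the 2x3 affine matrix M such that M @ [x, y, 1]^T maps each
    source triangle vertex onto the corresponding target triangle vertex."""
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])   # 3x3: one [x, y, 1] row per vertex
    # A @ M.T = dst has an exact solution for a non-degenerate triangle.
    return np.linalg.solve(A, dst).T        # 2x3 affine matrix

def warp_points(M, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of 2D points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# A triangle scaled by 2 about the origin yields M = [[2, 0, 0], [0, 2, 0]].
M = triangle_affine([(0, 0), (1, 0), (0, 1)], [(0, 0), (2, 0), (0, 2)])
print(M)
print(warp_points(M, [(1, 1)]))             # interior point follows the warp
```

Applied over every triangle of the triangulation, this moves the selected facial key points (and the pixels inside each triangle) toward their target positions while leaving untouched triangles fixed.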
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210215102.XA CN114581627B (en) | 2022-03-04 | 2022-03-04 | ARHUD-based imaging method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114581627A CN114581627A (en) | 2022-06-03 |
CN114581627B true CN114581627B (en) | 2024-04-16 |
Family
ID=81778166
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10643085B1 (en) * | 2019-01-30 | 2020-05-05 | StradVision, Inc. | Method and device for estimating height and weight of passengers using body part length and face information based on human's status recognition |
CN111476709A (en) * | 2020-04-09 | 2020-07-31 | 广州华多网络科技有限公司 | Face image processing method and device and electronic equipment |
WO2021174939A1 (en) * | 2020-03-03 | 2021-09-10 | 平安科技(深圳)有限公司 | Facial image acquisition method and system |
WO2021197190A1 (en) * | 2020-03-31 | 2021-10-07 | 深圳光峰科技股份有限公司 | Information display method, system and apparatus based on augmented reality, and projection device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6727807B2 (en) * | 2001-12-14 | 2004-04-27 | Koninklijke Philips Electronics N.V. | Driver's aid using image processing |
WO2017113403A1 (en) * | 2015-12-31 | 2017-07-06 | 华为技术有限公司 | Image information processing method and augmented reality ar device |
JP6897082B2 (en) * | 2016-12-13 | 2021-06-30 | 富士通株式会社 | Computer program for face orientation estimation, face orientation estimation device and face orientation estimation method |
US10872254B2 (en) * | 2017-12-22 | 2020-12-22 | Texas Instruments Incorporated | Digital mirror systems for vehicles and methods of operating the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang Applicant after: United New Energy Automobile Co.,Ltd. Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang Applicant before: Hozon New Energy Automobile Co., Ltd. |
GR01 | Patent grant | ||