WO2023174283A1 - Anti-motion sickness method, device and system based on visual compensation images - Google Patents

Anti-motion sickness method, device and system based on visual compensation images

Info

Publication number
WO2023174283A1
WO2023174283A1 · PCT/CN2023/081367 · CN2023081367W
Authority
WO
WIPO (PCT)
Prior art keywords
road
visual compensation
vehicle
image
texture
Prior art date
Application number
PCT/CN2023/081367
Other languages
English (en)
French (fr)
Inventor
居然
安平
张乐韶
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023174283A1 publication Critical patent/WO2023174283A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1415Digital output to display device ; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Definitions

  • the present application relates to the field of transportation, and more specifically, to an anti-motion sickness method, device and system based on visual compensation images.
  • Passengers in the car (such as rear passengers) get motion sickness because the closed environment in the car blocks the line of sight, and the visual channel cannot provide effective feedback on the motion status of the passengers, that is, it cannot provide the passengers with the motion status of the car body relative to the ground.
  • the vestibular system located in the ear can sense the movement of the vehicle body relative to the ground, resulting in a conflict between vision and vestibular nerve perception.
  • Passengers may experience motion sickness symptoms such as dizziness, nausea, and loss of appetite.
  • motion sickness can be solved or prevented by wearing special anti-motion sickness glasses, but passengers need to wear glasses, which are not very comfortable to wear, not user-friendly enough, and provide a poor user experience.
  • This application provides an anti-motion sickness method, device and system based on visual compensation images.
  • the visual compensation image can reflect the real-time motion state of the vehicle relative to the road (or the ground), and the visual compensation image includes the road and lane lines.
  • This visual compensation image is superimposed on the display screen of the vehicle infotainment system. The user can see his or her own motion relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness.
  • the user does not need to wear other special glasses, which is user-friendly and improves the user experience.
  • an anti-motion sickness method based on visual compensation images includes: generating a visual compensation image that reflects the real-time motion state of the vehicle relative to the road from the perspective of passengers in the vehicle, where the visual compensation image includes a road portion and lane lines; and displaying this visual compensation image on the display of the in-car infotainment system.
  • the first aspect provides an anti-motion sickness method based on a visual compensation image.
  • the visual compensation image can reflect the real-time motion state of the vehicle relative to the road.
  • the visual compensation image includes the road and lane lines, and the road and the lane lines have different colors.
  • This visual compensation image is superimposed on the display screen of the vehicle infotainment system. The user can see his or her own motion relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness.
  • generating a visual compensation image includes: acquiring an image captured in real time by a driving recorder in the vehicle; detecting the road portion and lane lines in the image; and filling the road portion and the lane lines in the image with non-transparent colors while filling the portion of the image outside the road portion and lane lines with a transparent color, to obtain the visual compensation image.
  • the road portion and the lane lines in the visual compensation image have different colors.
  • the image captured in real time by the driving recorder can reflect the real-time movement state of the vehicle relative to the road (or surrounding environment) from the perspective of the passengers in the car. The road portion and lane lines are filled non-transparently, and the other areas except the road portion and lane lines are filled transparently to obtain the visual compensation image. This avoids interference from the other areas in the image captured by the driving recorder (that is, the areas other than the road portion and lane lines), i.e., it avoids interference from the road background in the visual compensation image, gives a better visualization effect, and can improve the accuracy of the visual compensation image while solving the passengers' motion sickness.
  • generating a visual compensation image includes: obtaining an image captured in real time by a driving recorder in the vehicle; detecting the road portion and lane lines in the image; converting the image into a binary image in which the road portion and the lane lines are white and the portion outside the road portion and lane lines is black; and filling the road portion and lane lines in the binary image with non-transparent colors while filling the portion outside them with a transparent color, to obtain the visual compensation image, in which the road portion and the lane lines have different colors.
  • the image captured in real time by the driving recorder can reflect the real-time motion state of the vehicle relative to the road (or surrounding environment) from the perspective of the passengers in the car, and because the pixels in the binary image are filled with color, It can reduce the complexity and calculation amount of pixel filling color and is easy to implement. Moreover, other areas except the road part and lane lines are filled with transparency, which avoids the interference of the road background in the image and has better visualization effect.
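The fill step described above can be sketched as follows, assuming the road/lane detector has already produced binary masks (the detector itself, the chosen colors, and the function name are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def make_compensation_image(road_mask, lane_mask,
                            road_color=(80, 80, 80),
                            lane_color=(255, 255, 255)):
    """Build an RGBA visual compensation image from binary masks.

    road_mask, lane_mask: HxW uint8 arrays, 255 where the road / lane
    lines were detected, 0 elsewhere (the 'binary image' above).
    Road and lane pixels receive different non-transparent colors;
    every other pixel stays fully transparent (alpha = 0), so the
    background cannot interfere with the superimposed display.
    """
    h, w = road_mask.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)   # transparent by default
    rgba[road_mask > 0] = (*road_color, 255)     # opaque road fill
    rgba[lane_mask > 0] = (*lane_color, 255)     # lane lines drawn on top
    return rgba
```

The RGBA output can then be composited over the infotainment UI by any standard alpha-aware renderer.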
  • generating a visual compensation image includes: obtaining motion parameters detected in real time by a gyroscope and accelerometer in the vehicle; generating the three-dimensional rotation vector (r_x, r_y, r_z) and the three-dimensional translation vector (t_x, t_y, t_z) of the vehicle based on the motion parameters; using the three-dimensional rotation vector (r_x, r_y, r_z) and the three-dimensional translation vector (t_x, t_y, t_z) to process the road texture in a preset road model, where the road texture includes a road portion and lane lines, and the road portion and the lane lines have different colors; superimposing the processed road texture on the road model; and performing rigid body transformation and perspective transformation on the road model superimposed with the processed road texture to generate the visual compensation image.
  • the data detected in real time by motion sensors such as gyroscopes and accelerometers is used to process the preset road model image to generate a visual compensation image, without the need for a camera, and the amount of calculation and data transmission is relatively low.
  • using the three-dimensional rotation vector (r_x, r_y, r_z) and the three-dimensional translation vector (t_x, t_y, t_z) to process the preset road model includes: using the curvature r_z about the Z-axis in the three-dimensional rotation vector to apply bending deformation to the road texture; and using the velocity t_y along the Y-axis in the three-dimensional translation vector to apply cyclic movement deformation to the road texture. In the three-dimensional rotation vector (r_x, r_y, r_z) and the three-dimensional translation vector (t_x, t_y, t_z), the positive direction of the X-axis is the right-hand direction of the vehicle body, the positive direction of the Y-axis is the forward direction of the vehicle front, and the positive direction of the Z-axis is directly above the vehicle body.
  • the processed road texture can thus be obtained. Since the above processing is performed in real time, the processed road texture can reflect the real-time motion state of the vehicle. That is, in the processed road texture, the shape and position of the road portion and lane lines change in real time with the movement of the vehicle, and these changes can reflect the real-time motion state of the vehicle relative to the road portion and lane lines.
  • the deformation of the road texture can reflect the movement state of the vehicle around the Z-axis during real-time movement. For example, the movement state of the vehicle when turning left or right can be reflected in the deformation of the road texture.
  • u' represents the value on the u-axis after the road texture is bent with the corresponding curvature r_z, v' represents the value on the v-axis after the bending, u and v represent the values on the u-axis and v-axis before the road texture is bent with the corresponding curvature r_z, and k is the parameter that controls the mapping from steering angle to texture curvature. The road texture has a texture coordinate system that includes a u-axis and a v-axis, where the u-axis is the direction perpendicular to the lane lines and the v-axis is the direction parallel to the lane lines.
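The bending deformation maps texture coordinates (u, v) to (u', v') as a function of the curvature r_z and the parameter k. The exact formula is not reproduced in this excerpt, so the quadratic form below is only an illustrative assumption of how such a bend could be computed:

```python
# Illustrative sketch, NOT the patent's exact formula: columns of the
# road texture are shifted sideways more strongly the further they are
# along the lane direction, which visually reads as the road curving.
def bend_texture_coords(u, v, r_z, k=0.5):
    """Apply bending deformation to texture coordinates (u, v).

    u is perpendicular to the lane lines, v parallel to them, both in
    [0, 1]. r_z is the curvature about the Z-axis; k (assumed here)
    scales the steering angle to the texture curvature.
    """
    u_prime = u + k * r_z * v ** 2   # sideways shift grows with distance
    v_prime = v                      # no deformation along the lane direction
    return u_prime, v_prime
```

With r_z = 0 (driving straight) the coordinates are unchanged; a nonzero r_z bends the lane lines left or right.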
  • the deformation of the road texture can reflect the movement state of the vehicle along the Y-axis during real-time movement. For example, the bumpy state of the vehicle on a rough road can be reflected in the deformation of the road texture.
  • u' represents the value on the u-axis after the road texture undergoes the cyclic movement corresponding to the speed t_y, v' represents the value on the v-axis after the cyclic movement, u and v represent the values on the u-axis and v-axis before the road texture undergoes the cyclic movement corresponding to the speed t_y, and s is the parameter that controls the texture movement speed. The road texture has a texture coordinate system that includes a u-axis and a v-axis, where the u-axis is the direction perpendicular to the lane lines and the v-axis is the direction parallel to the lane lines.
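The cyclic movement deformation scrolls the texture along the v-axis in proportion to the forward speed t_y. The exact mapping is not reproduced in this excerpt; a plausible sketch, assuming a linear scroll wrapped with a modulo so the road streams past endlessly (s, dt, and the linear form are assumptions):

```python
def scroll_texture_coords(u, v, t_y, dt, s=1.0):
    """Apply cyclic movement deformation to texture coordinates.

    The texture scrolls along the v-axis (parallel to the lane lines)
    proportionally to the forward speed t_y; the modulo makes the
    movement cyclic. s scales vehicle speed to texture movement speed
    and dt is the frame interval (both illustrative parameters).
    """
    u_prime = u                          # no sideways movement
    v_prime = (v + s * t_y * dt) % 1.0   # wrap around at v = 1.0
    return u_prime, v_prime
```

Calling this once per rendered frame produces the continuous "road flowing toward the viewer" effect.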
  • performing rigid body transformation on the road model superimposed with the processed road texture includes: using the vector (r_x, r_y, t_z) to perform rigid body transformation on the road model superimposed with the processed road texture.
  • after perspective transformation, the image becomes a visual compensation image from the perspective of the passenger in the car, which can improve the accuracy of the visual compensation image.
  • displaying the visual compensation image on the display screen of the vehicle entertainment information system includes: superimposing and displaying the visual compensation image on the main interface displayed on the display screen.
  • the visual compensation image is used to stimulate the user's vision while reducing or eliminating the impact of the visual compensation image on the main interface displayed on the display screen. That is, on the basis of alleviating or eliminating the conflict between the visual nerve and the vestibular nerve, the impact on the user's use of the in-vehicle entertainment information system is reduced, thereby further improving the user experience.
  • the superimposed display satisfies A = α·C + (1 − α)·I, where C represents the visual compensation image, I represents the main interface displayed on the display screen, α is the transparency parameter during superimposed display, the value range of α is greater than 0 and less than 1, and A represents the image finally displayed on the display screen; α is pre-configured.
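The superimposed display combines the compensation image C and the main interface I with a transparency parameter α. A minimal sketch, assuming standard per-pixel alpha blending A = α·C + (1 − α)·I over uint8 RGB frames (the default α value here is illustrative, not taken from the patent):

```python
import numpy as np

def overlay_compensation(C, I, alpha=0.3):
    """Blend the visual compensation image C over the main interface I.

    Implements A = alpha*C + (1 - alpha)*I per pixel, with
    0 < alpha < 1. Both inputs are HxWx3 uint8 RGB frames; the
    computation is done in float to avoid uint8 overflow, then
    converted back for display.
    """
    A = alpha * C.astype(np.float32) + (1.0 - alpha) * I.astype(np.float32)
    return A.astype(np.uint8)
```

A user-facing transparency control, as described below, would simply change the `alpha` argument on the next rendered frame.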
  • the transparency of the visual compensation image can be automatically adjusted to improve user experience.
  • the display interface of the display screen also provides a control for the user to manually adjust the transparency of the visual compensation image. Users can use this control to adjust the transparency of the visual compensation image in real time according to their own needs, which further meets user needs and improves the user experience.
  • the in-vehicle computing platform can generate a visual compensation image in real time, or the in-vehicle entertainment information system can process images captured in real time by a driving recorder to generate a visual compensation image.
  • the in-vehicle entertainment information system uses the data detected in real time by motion sensors such as gyroscopes and accelerometers in the car to process the preset road model image to generate a visual compensation image.
  • the terminal device used by the passengers in the car can process the images captured in real time by the driving recorder to generate the visual compensation image, or it can use the data detected in real time by motion sensors such as gyroscopes and accelerometers in the car to process the preset road model image to generate the visual compensation image.
  • the terminal device can send the visual compensation image to the in-vehicle entertainment information system.
  • after the terminal device used by the passengers in the car generates the visual compensation image, the image can also be displayed on the display screen of the terminal device itself.
  • the visual compensation image can be displayed on the main interface in the form of a floating window, displayed on the display screen in a split-screen manner with the main interface, or displayed on the display screen by overlaying images.
  • an anti-motion sickness device based on visual compensation images.
  • the device includes: a processor and a memory; the processor is coupled to the memory, and the memory stores program instructions. When the program instructions stored in the memory are executed by the processor, the method in the above first aspect or any one of the possible implementations of the first aspect is executed.
  • an anti-motion sickness device based on visual compensation images.
  • the device includes at least one processor and an interface circuit, and the at least one processor is used to execute the method in the above first aspect or any possible implementation of the first aspect.
  • the anti-motion sickness device based on visual compensation images can be an in-vehicle computing platform, an in-vehicle entertainment information system, or a terminal device used by in-vehicle passengers; alternatively, the in-vehicle computing platform, in-vehicle entertainment information system, or terminal device used by in-vehicle passengers may include the anti-motion sickness device based on visual compensation images.
  • an anti-motion sickness system based on visual compensation images includes: an in-vehicle computing platform and an in-vehicle entertainment information system. The system is used to perform the method in the above first aspect or any one of the possible implementations of the first aspect.
  • the system further includes: at least one of a driving recorder and a motion sensor, where the motion sensor includes a gyroscope and an accelerometer.
  • a vehicle is provided, which includes: the anti-motion sickness device based on visual compensation images provided in the second aspect or the third aspect, or the anti-motion sickness system based on visual compensation images provided in the fourth aspect or any possible implementation of the fourth aspect.
  • a computer program product includes a computer program. When executed by a processor, the computer program is used to perform the method in the first aspect or any possible implementation of the first aspect.
  • a computer-readable storage medium is provided.
  • a computer program is stored in the computer-readable storage medium. When the computer program is executed, it is used to execute the method in the first aspect or any possible implementation of the first aspect.
  • in an eighth aspect, a chip is provided, including: a processor for calling and running a computer program from a memory, so that a communication device installed with the chip executes the method in the first aspect or any possible implementation of the first aspect.
  • FIG. 1 is a schematic diagram of a communication system architecture suitable for the embodiment of the present application provided by the embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an example of an anti-motion sickness method based on visual compensation images provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an example of an in-vehicle computing platform using images captured by a driving recorder in real time to generate visual compensation images provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an example of a visual compensation image generated by an in-vehicle computing platform using images captured by a driving recorder in real time according to an embodiment of the present application.
  • FIG. 5 is a schematic flow chart of an in-vehicle computing platform using real-time detection data of gyroscopes and accelerometers to generate visual compensation images provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of the X-axis, Y-axis, and Z-axis on a vehicle provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of an example of a road model provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram before and after applying cyclic movement deformation and bending deformation to the road texture provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of a visual compensation image from the perspective of a passenger in a vehicle obtained by applying perspective transformation to a road model provided by an embodiment of the present application.
  • FIG. 10 is a schematic interface diagram of an example of superimposing and displaying a visual compensation image on a display screen provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of an example of the structure of an anti-motion sickness device based on visual compensation images provided by an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of the structure of another anti-motion sickness device based on visual compensation images provided by the embodiment of the present application.
  • FIG. 13 is a schematic block diagram of an example chip system structure provided by an embodiment of the present application.
  • first and second are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, features defined as “first” and “second” may explicitly or implicitly include one or more of these features. In the description of this embodiment, unless otherwise specified, “plurality” means two or more.
  • various aspects or features of the present application may be implemented as methods, apparatus, or articles of manufacture using standard programming and/or engineering techniques.
  • the term "article of manufacture" encompasses a computer program accessible from any computer-readable device, carrier or medium.
  • computer-readable media may include, but are not limited to: magnetic storage devices (eg, hard disks, floppy disks, tapes, etc.), optical disks (eg, compact discs (CD), digital versatile discs (DVD)) etc.), smart cards and flash memory devices (e.g. erasable programmable read-only memory (EPROM), cards, sticks or key drives, etc.).
  • various storage media described herein may represent one or more devices and/or other machine-readable media for storing information.
  • machine-readable medium may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
  • GSM Global System for Mobile communications
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • GPRS General Packet Radio Service
  • LTE Long Term Evolution
  • FDD Frequency Division Duplex
  • TDD Time Division Duplex
  • UMTS Universal Mobile Telecommunication System
  • WiMAX Worldwide Interoperability for Microwave Access
  • motion sickness: when the movement perceived by the human eye does not match the movement perceived by the vestibular system located in the ear, symptoms such as dizziness, nausea, and loss of appetite will appear. Medically known as motion sickness, this condition easily occurs in bumpy enclosed environments such as cars, ships, and airplanes. For example, passengers sitting in the back seat of a car are often prone to motion sickness, which is the common form of car sickness. Passengers get motion sickness because the closed environment in the car blocks their line of sight: the visual channel cannot provide effective feedback on the motion state of the passengers, that is, it cannot provide passengers with the motion state of the car body relative to the ground, while the vestibular system located in the ears can sense the motion state of the vehicle body relative to the ground, creating a conflict between vision and vestibular nerve perception. Based on the above principle, the key to solving motion sickness is to eliminate or alleviate the conflict between the optic nerve and the vestibular nerve.
  • drug therapy which involves taking drugs that inhibit the vestibular nerve or central nervous system to eliminate or alleviate the conflict between the optic nerve and the vestibular nerve, thereby alleviating the symptoms of motion sickness.
  • drugs include dimenhydrinate, sedatives, etc.
  • drug therapy will have side effects on passengers, such as drowsiness, fatigue and other symptoms, which will affect the user's health.
  • the wristband compresses the Neiguan acupoint to suppress the vestibular nerves. Through the meridian function, it can eliminate or alleviate the conflict between the optic nerve and the vestibular nerve, thereby alleviating the symptoms of motion sickness.
  • passengers need to wear this wristband, which is not very comfortable to wear, not user-friendly enough, and provides a poor user experience.
  • Anti-motion sickness glasses can be designed with four rings, two in the front and one on each side. Each ring contains blue liquid inside; the liquid can shake, so the acceleration and deceleration changes while the car is driving are expressed through the liquid level. After passengers put on the anti-motion sickness glasses, their eyes can perceive the movement of the blue liquid, which can ease the conflict between the optic nerve and the vestibular nerve, thus alleviating the problem of motion sickness.
  • AR augmented reality
  • VR virtual reality
  • the principle is to draw a horizon level, or other images that can reflect the passenger's own motion, on the display screen of the AR glasses or VR glasses.
  • the passenger's own motion status can be obtained from the sensors (such as gyroscopes, accelerometers, etc.) built into AR glasses or VR glasses.
  • this application provides an anti-motion sickness method based on visual compensation images.
  • the visual compensation images can reflect the real-time motion status of the vehicle relative to the road (or the ground).
  • the visual compensation images include the road and lane lines.
  • This visual compensation image is superimposed on the display screen of the vehicle infotainment system. The user can see his or her own motion relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness.
  • the user does not need to wear other special glasses, which is user-friendly and improves the user experience.
  • FIG. 1 is a schematic diagram of a communication system architecture suitable for embodiments of the present application.
  • the communication system includes: a driving recorder and motion sensors (such as gyroscopes and accelerometers) located inside the vehicle, as well as an in-vehicle computing platform and an in-vehicle entertainment information system.
  • the in-car computing platform can be understood as the processor in the car.
  • the in-vehicle computing platform may include: a cockpit domain controller (CDC), an electronic control unit (ECU) on the vehicle, a trip computer, a vehicle-mounted computer or a vehicle-mounted T-Box, etc.; the embodiments of this application are not limited here.
  • the in-vehicle entertainment information system may include a display screen.
  • the display screen may be set on the back of the front seat. Rear passengers can use the in-vehicle entertainment information system to watch videos, pictures, browse the web, etc.
  • the in-vehicle computing platform, the in-vehicle entertainment information system, and the driving recorder are connected by communication and can transmit data to each other.
  • the in-vehicle entertainment information system and the driving recorder are respectively connected to the in-vehicle computing platform through data lines (that is, in a wired manner).
  • the in-vehicle entertainment information system and the driving recorder can also communicate with the in-vehicle computing platform through wireless connections, such as Bluetooth, wireless fidelity (Wi-Fi) networks, near field communication (NFC) technology, or infrared (IR) technology; the embodiments of the present application are not limited here.
  • the motion sensors provided in the vehicle include, for example, gyroscopes and accelerometers.
  • gyroscopes and/or accelerometers can be installed in the in-car infotainment system, in the in-car computing platform, or at other locations in the car, which is not limited in the embodiments of the present application.
  • These motion sensors can detect the motion parameters of the vehicle relative to the ground, and the in-car computing platform or in-vehicle entertainment information system can obtain and process these motion parameters.
  • system architecture illustrated in Figure 1 does not constitute a specific limitation on the communication system architecture applicable to the examples of this application.
  • system architecture applicable to the example of the present application may include more or fewer components than those shown in Figure 1 , or different components, etc., and the embodiments of the present application are not limited here.
  • embodiments of the present application can also be applied to other vehicles including ships, airplanes, etc., thereby helping passengers alleviate or eliminate seasickness and airsickness problems when taking these vehicles.
  • the embodiments of the present application can also be applied to systems including terminal devices used by passengers in the car and motion sensors installed in the car.
  • the embodiments of the present application are not limited here.
  • the components shown in Figure 1 may be implemented in hardware, software, or a combination of software and hardware.
  • Figure 2 shows a schematic flow chart of an example of the anti-motion sickness method based on visual compensation images provided by this application.
  • the method shown in Figure 2 can be applied in the communication system shown in Figure 1.
  • a scenario including an in-vehicle computing platform and an in-vehicle entertainment information system is used as an example for illustration, but this should not limit the embodiments of the present application.
  • the execution subject of the following methods S210, S220 and S230 may also be a terminal device used by passengers in the car.
  • the method includes: S210 to S230.
  • the in-vehicle computing platform generates a visual compensation image in real time.
  • the visual compensation image reflects the real-time motion state of the vehicle relative to the road from the perspective of the passengers in the vehicle.
  • the in-vehicle computing platform can process the images captured by the driving recorder in real time to generate a visual compensation image.
  • the visual compensation image is from the perspective of the passengers in the vehicle.
  • the visual compensation image can reflect the real-time motion status of the vehicle relative to the road (or surrounding environment).
  • the in-vehicle computing platform can also use the data detected in real time by motion sensors such as the gyroscope and accelerometer in the vehicle to process a preset road model image to generate a visual compensation image.
  • the visual compensation image is a visual compensation image from the perspective of the passengers in the car, which can reflect the real-time motion status of the vehicle relative to the road (or the surrounding environment).
  • an example flow in which the in-vehicle computing platform provided by the embodiment of the present application generates a visual compensation image using images captured in real time by a driving recorder includes S210a to S212a:
  • the driving recorder captures images in real time.
  • a driving recorder installed in the car can capture images of the road in front of the car and the surrounding environment in real time while the car is driving.
  • the captured image may include road information and lane information on which the car is currently traveling.
  • the content in the captured image can represent the movement of the environment outside the vehicle (for example, including roads, lane lines, etc.) seen by the passengers in the vehicle, that is, it is an image from the perspective of the passengers in the vehicle.
  • after acquiring the captured image, the in-vehicle computing platform detects the road part F and the lane line L in the captured image.
  • the driving recorder can send the captured images to the in-vehicle computing platform in real time through a controller area network (CAN) bus, a data line, or wireless communication.
  • after the in-vehicle computing platform obtains the image, it can use an artificial neural network algorithm to detect the road part F and the lane line L in the captured image. This avoids interference from other backgrounds in the captured image and yields a better visualization effect.
  • the in-vehicle computing platform can perform road and lane line detection using the Mask R-CNN algorithm to determine the road portion F and lane line L in the image. It should be understood that in other embodiments of the present application, the in-vehicle computing platform can also use other algorithms to determine the road portion F and the lane line L in the image, which are not limited in the embodiments of the present application.
  • the in-vehicle computing platform fills the road part F and the lane line L in the captured image with different colors to obtain a visual compensation image from the user's perspective.
  • after the in-vehicle computing platform detects the road part F and the lane line L in the captured image in real time, an image including the road part F and the lane line L can be obtained.
  • the in-vehicle computing platform can also perform pixel conversion on the image to obtain a binary image, that is, a binary image corresponding to the captured image is obtained.
  • a binary image is an image in which each pixel has only two possible grayscale states: the grayscale value of any pixel is either 0 or 255, representing black (0) and white (255) respectively, so the entire image presents a visual effect of only black and white.
  • the binary image includes the road part F and the lane line L. The grayscale values of the pixels on the road part F and the lane line L are both 255, and the grayscale values of all other pixels are 0.
  • a grayscale value of 255 means the pixel is white, and a grayscale value of 0 means the pixel is black; thus the road part F and the lane line L are both white, and the area other than the road part F and the lane line L is black.
  • the following formula (1) can be used to fill the pixels in the binary image with color:
  • C(p) = (255, 255, 255, 255) if L(p) = 255 or F(p) = 255; C(p) = (0, 0, 0, 0) otherwise. (1)
  • p represents a certain pixel in the binary image
  • C represents the visual compensation image
  • C(p) represents the color of the pixel p in the visual compensation image.
  • L(p) represents the grayscale of the pixel point p on the lane line in the binary image.
  • F(p) represents the grayscale of the pixel p of the road part F in the binary image.
  • if the pixel p is located on the road part F or the lane line L of the binary image, its grayscale value is 255, that is, the pixel is white.
  • the color of pixel p in the visual compensation image is represented by the values of pixel p on the four channels of red (R), green (G), blue (B) and alpha.
  • (255, 255, 255, 255) respectively represent the values of pixel p in the visual compensation image on the four channels of red (R), green (G), blue (B) and alpha
  • the values on the R, G, and B channels are all 255, and the value on the alpha channel is also 255.
  • a value of 255 on the alpha channel indicates that the pixel is completely opaque
  • a value of 0 on the alpha channel indicates that the pixel is completely transparent
  • a value on the alpha channel between 0 and 255 means that the pixel is semi-transparent.
  • if the pixel p is located at a position other than the road part F and the lane line L, that is, the pixel p is black (its grayscale value is 0), then when filling the pixel p, the values on the R, G, and B channels are all 0 and the value on the alpha channel is also 0, indicating that the pixel p is completely transparent in the visual compensation image.
  • because the road part F and the lane line L are filled as opaque while the other areas are filled as transparent, the resulting visual compensation image avoids interference from the other areas in the picture captured by the driving recorder (that is, the areas other than the road part and the lane line). In other words, it avoids interference from the road background, has a better visualization effect, and improves the accuracy of the visual compensation image while addressing passenger motion sickness.
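  • the fill described by formula (1) can be sketched as follows. This is a minimal pure-Python illustration in which the binary image is represented as nested lists of grayscale values and the RGBA tuples follow the convention above; the function and variable names are illustrative, not taken from the patent:

```python
def fill_visual_compensation(binary_image):
    """Map a binary image (grayscale 0 or 255) to an RGBA visual
    compensation image, as in formula (1): road/lane pixels (255)
    become opaque white, all other pixels become fully transparent."""
    OPAQUE_WHITE = (255, 255, 255, 255)   # road part F and lane line L
    TRANSPARENT = (0, 0, 0, 0)            # everything else
    return [
        [OPAQUE_WHITE if g == 255 else TRANSPARENT for g in row]
        for row in binary_image
    ]

# A 3x4 toy binary image: the middle row is "road", the rest is background.
binary = [
    [0,   0,   0,   0],
    [255, 255, 255, 255],
    [0,   0,   0,   0],
]
compensated = fill_visual_compensation(binary)
print(compensated[1][0])  # (255, 255, 255, 255)
print(compensated[0][0])  # (0, 0, 0, 0)
```

  • filling with two distinct region colors, as in formula (2), only changes the tuples chosen per region; the transparent fill for the background is what removes the road-background interference described above.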
  • a formula different from formula (1) can also be used to fill the road part F and the lane line L in the binary image with different colors. For example, the following formula (2) can be used:
  • in formula (2), the values of the pixel p on the R, G, and B channels are R1, G1, and B1 respectively, and the value of the pixel p on the alpha channel is alpha1; the values of R1, G1, and B1 can differ from one another, and alpha1 can also take a value other than 0.
  • the color of the combination of R 1 , G 1 , and B 1 is different from the color of the combination of R 2 , G 2 , and B 2 , that is, the road portion F and the lane line L are different colors.
  • the values of alpha 2 and alpha 1 can be the same or different.
  • in the process of filling the pixels in the binary image with color using formula (2), if the pixel p is located at a position other than the road part and the lane line, its values on the R, G, and B channels are R3, G3, and B3 respectively, and its value on the alpha channel can be 0 (that is, completely transparent). The values of R3, G3, and B3 can differ from one another. It can be understood that the value of the pixel p on the alpha channel may also be other than 0, which is not limited in this application.
  • in this way, the road part F and the lane line L in the binary image are filled with different colors to obtain a visual compensation image. Passengers can distinguish the road part F and the lane line L in the visual compensation image and perceive their real-time motion state relative to them. Because the color filling is performed on a binary image, the complexity and computation of the filling are reduced, making it easy to implement. Moreover, the areas other than the road part and the lane line are filled as transparent, which avoids interference from the road background and yields a better visualization effect.
  • Figure 4 shows a schematic diagram of a visual compensation image generated by an in-vehicle computing platform provided by this application using images captured in real time by a driving recorder.
  • Figure a in Figure 4 shows an image captured by a driving recorder.
  • Figure b in Figure 4 shows the visual compensation image obtained by filling the road part F and lane line L with different colors in the binary image after the in-vehicle computing platform performs pixel conversion on the image to obtain a binary image.
  • the binary image only includes the road part F and the lane line L, and the two have different colors.
  • Other parts shown in picture a in Figure 4 (such as the distant sky, grass on the roadside, etc.) are all transparent and will not be displayed in the visual compensation image shown in picture b in Figure 4.
  • in some embodiments, after the in-vehicle computing platform detects the road part F and the lane line L in the captured image, an image including the road part F and the lane line L can be obtained.
  • the in-vehicle computing platform can also directly fill the road part F and lane line L in the image with different colors to obtain a visual compensation image, without further converting the image into a binary image.
  • the following formula (3) can be used to fill the pixels in the captured image with colors:
  • the values of the pixel p on the R, G, and B channels are R1, G1, and B1 respectively, and the value of the pixel p on the alpha channel is alpha1; the values of R1, G1, and B1 can differ from one another, and alpha1 can also take a value other than 0.
  • the values of the pixel p on the R, G, and B channels are R2, G2, and B2 respectively, and the value of the pixel p on the alpha channel is alpha2; the values of R2, G2, and B2 can differ from one another, and alpha2 can also take a value other than 0.
  • the color of the combination of R 1 , G 1 , and B 1 is different from the color of the combination of R 2 , G 2 , and B 2 , that is, the road portion F and the lane line L are different colors.
  • the values of alpha 2 and alpha 1 can be the same or different.
  • if the pixel p is located at a position other than the road part and the lane line, its values on the R, G, and B channels are R3, G3, and B3 respectively, and its value on the alpha channel is 0 (that is, completely transparent). The values of R3, G3, and B3 can differ from one another. It can be understood that the value of the pixel p on the alpha channel may also be other than 0, which is not limited in this application.
  • in this way, a visual compensation image is obtained. The passenger can distinguish the road part F and the lane line L in the visual compensation image and perceive their real-time motion state relative to them. Filling the areas other than the road part and the lane line as transparent avoids interference from the road background and yields a better visualization effect.
  • when the in-vehicle computing platform fills the road part F and the lane line L in the captured image with different colors, other formulas or methods can also be used, as long as the areas other than the road part and the lane line are filled as transparent or nearly transparent and the road part F and the lane line L are filled with different colors; the embodiments of this application are not limited here.
  • S210a to S212a are all real-time processing processes, that is, the in-vehicle computing platform can generate visual compensation images in real time.
  • Figure 5 shows a schematic flow chart of an example in which the in-vehicle computing platform processes a preset road model image using data detected in real time by motion sensors such as a gyroscope and an accelerometer to generate a visual compensation image.
  • the method includes: S210b to S213b.
  • the gyroscope and accelerometer in the car detect the vehicle's motion parameters in real time.
  • gyroscopes and accelerometers installed in the car can detect the motion parameters of the vehicle in real time while the car is driving.
  • the motion parameters may include motion parameters of the vehicle body on the X-axis, Y-axis, Z-axis, etc. during the movement of the vehicle, which are not limited in the embodiments of the present application.
  • the in-vehicle computing platform obtains the motion parameters detected by the gyroscope and accelerometer in real time and processes them to obtain the three-dimensional rotation vector and the three-dimensional translation vector of the vehicle motion.
  • the gyroscope and accelerometer can send real-time detected motion data to the in-vehicle computing platform through CAN bus, data lines, or wireless communication.
  • after the in-vehicle computing platform obtains the motion parameters detected in real time by the gyroscope and the accelerometer, it can process these parameters to obtain the rotational angular velocity and acceleration of the vehicle on the X-axis, Y-axis, and Z-axis. The rotational angular velocity and acceleration are filtered and integrated over time to obtain the three-dimensional rotation vector (r x , r y , r z ) and the three-dimensional translation vector (t x , t y , t z ).
  • the schematic diagram of the X-axis, Y-axis, and Z-axis of the vehicle is as shown in Figure 6.
  • the positive direction of the X-axis of the vehicle can be a lateral direction of the vehicle body.
  • the positive direction of the Y-axis of the vehicle can be the forward direction of the vehicle (that is, the forward direction of the vehicle when driving the vehicle)
  • the positive direction of the Z-axis can be directly above the vehicle body (that is, vertical on the plane where the X-axis and Y-axis are located and pointing towards the top of the vehicle).
  • the three-dimensional rotation vector (r x , ry , r z ) and the three-dimensional translation vector (t x , ty , t z ) can represent the real-time motion state of the vehicle.
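  • the filtering and time integration described in S211b can be sketched as follows. This is a simplified illustration assuming already-filtered sensor samples and a fixed sampling interval; the function and variable names are illustrative, not taken from the patent:

```python
def integrate_motion(gyro_samples, accel_samples, dt):
    """Integrate filtered angular velocities (rad/s) and accelerations
    (m/s^2) over time to approximate the 3D rotation vector
    (rx, ry, rz) and, via the velocity, the 3D translation vector
    (tx, ty, tz). Simple rectangular (Euler) integration."""
    rotation = [0.0, 0.0, 0.0]
    velocity = [0.0, 0.0, 0.0]
    translation = [0.0, 0.0, 0.0]
    for omega, acc in zip(gyro_samples, accel_samples):
        for axis in range(3):
            rotation[axis] += omega[axis] * dt        # angle = integral of omega
            velocity[axis] += acc[axis] * dt          # v = integral of a
            translation[axis] += velocity[axis] * dt  # t = integral of v
    return tuple(rotation), tuple(translation)

# Ten 100 ms samples: constant forward acceleration (Y) and yaw rate (Z).
gyro = [(0.0, 0.0, 0.1)] * 10
accel = [(0.0, 2.0, 0.0)] * 10
rot, trans = integrate_motion(gyro, accel, dt=0.1)
print(rot)    # rz accumulates to about 0.1 rad after 1 s
print(trans)  # ty grows quadratically under constant acceleration
```

  • in practice the raw samples would first be low-pass filtered, as the text notes, before this integration step.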
  • the in-vehicle computing platform performs cyclic movement deformation and bending deformation on the road texture in the preset road model based on the three-dimensional rotation vector and the three-dimensional translation vector of the vehicle to obtain the processed road texture.
  • the road model can be pre-stored in the in-vehicle computing platform (or the road model can also be pre-stored in the in-vehicle entertainment information system).
  • Road models can be formed through plane modeling.
  • the road model includes a road texture.
  • the road texture includes a road part F and a lane line L.
  • the lane line L includes solid lines and dotted lines of a certain width.
  • the road part F and the lane line L have different colors. Passengers can intuitively distinguish the dividing line or edge between the road part F and the lane line L in the road model, so that the passengers can distinguish the road part F and the lane line L.
  • the texture coordinate system includes a u-axis and a v-axis, where the u-axis is a direction perpendicular to the lane line L and the v-axis is a direction parallel to the lane line L.
  • Figure 7 shows a schematic diagram of a road model.
  • the road model includes a road texture.
  • the road texture includes a road part F and a lane line L.
  • the road part F and the lane line L have different colors.
  • There is a texture coordinate system in the road texture and the texture coordinate system includes the u-axis and the v-axis.
  • in the bending deformation, u' represents the value on the u-axis after the road texture is bent with the curvature corresponding to r z , and v' represents the value on the v-axis after the bending; u and v represent the values on the u-axis and the v-axis before the bending.
  • k is a parameter that controls the steering angle and the texture curvature. The value range of k may be 0.05 ≤ k ≤ 0.20; for example, the value of k can be 0.1234134.
  • the deformation of the road texture can reflect the movement state of the vehicle around the Z-axis during real-time movement. For example, the movement state of the vehicle when turning left and right can be reflected from the deformation of the road texture.
  • the component t y of the three-dimensional translation vector is used to make the road texture produce a cyclic movement corresponding to the speed t y , that is, to apply a cyclic movement deformation to the road texture.
  • the following formula (5) can be used to cause the road texture to produce a cyclic movement corresponding to the speed ty :
  • u' represents the value on the u-axis after the road texture produces the cyclic movement corresponding to the speed t y , and v' represents the value on the v-axis after the cyclic movement; u and v represent the values on the u-axis and the v-axis before the cyclic movement.
  • s is a parameter that controls the texture movement speed.
  • the value of s is related to the physical length of the texture.
  • for example, if the length of one dashed segment of the virtual lane line is 4 m, the length of the blank segment (i.e., the road part) between two dashed segments is 6 m, and the total length of the lane line texture is 30 m, then the value of s can be determined assuming the vehicle speed is 60 km/h and the sampling interval of the three-dimensional translation vector is 100 ms.
  • this deformation of the road texture can reflect the movement state of the vehicle along the Y-axis during real-time movement; for example, the bumpy state of the vehicle on a rough road can also be reflected in the deformation of the road texture.
  • after the cyclic movement deformation and the bending deformation are applied to the road texture, the processed road texture is obtained. Since the above processing is performed in real time, the processed road texture can reflect the real-time motion state of the vehicle. That is, in the processed road texture, the shape and position of the road part F and the lane line L change in real time with the movement of the vehicle, and these changes can reflect the real-time motion of the vehicle relative to the road part and the lane lines.
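  • the two texture deformations can be sketched in texture coordinates as follows. The exact bending expression is not reproduced in this text, so a quadratic bend u' = u + k·r z ·v² is assumed here purely for illustration; the cyclic movement wraps the v coordinate modulo the texture length (here normalized to 1). All names are illustrative:

```python
def bend(u, v, rz, k=0.1):
    """Bending deformation: shift u by an amount that grows with the
    distance v along the lane, producing a curvature driven by the
    yaw component rz. (Quadratic form assumed for illustration.)"""
    return u + k * rz * v * v, v

def cyclic_move(u, v, ty, s):
    """Cyclic movement deformation: scroll the texture along the lane
    direction at a rate proportional to the forward translation ty;
    v wraps within [0, 1) so the texture repeats endlessly."""
    return u, (v + s * ty) % 1.0

u, v = 0.5, 0.8
u_bent, _ = bend(u, v, rz=0.2)                  # texture curves toward the turn
_, v_moved = cyclic_move(u, v, ty=0.5, s=0.6)   # 0.8 + 0.3 wraps to 0.1
print(u_bent, v_moved)
```

  • applying both transforms per texel each frame yields the processed road texture whose road part and lane lines move and bend with the vehicle.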
  • Figure 8 shows an example of a schematic diagram before and after applying cyclic movement deformation and bending deformation to the road texture.
  • before the deformation, the road part F and the lane line L are stationary; after the deformation, the position and shape of the road part F and the lane line L change with the motion state of the vehicle.
  • S213b: The in-vehicle computing platform applies the processed road texture to the road model, applies a rigid body transformation and a perspective transformation to the road model, and obtains a visual compensation image.
  • for example, the in-vehicle computing platform can apply the processed road texture to the road model (for example, paste it onto the road model), then apply a rigid body transformation to the road model using the vector (r x , r y , t z ), and then apply a perspective transformation to the road model after the rigid body transformation to obtain a visual compensation image. After the perspective transformation, the image becomes a visual compensation image from the perspective of the passengers in the vehicle.
  • for example, Figure 9 is a schematic diagram of an example of a visual compensation image from the perspective of a passenger in the vehicle, obtained by applying the perspective transformation to the road model.
  • the visual compensation map can represent the movement outside the vehicle seen by the passengers in the vehicle, that is, it reflects the real-time movement status of the vehicle relative to the road from the perspective of the passengers in the vehicle.
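  • the rigid body transformation with (r x , r y , t z ) followed by a perspective projection can be sketched as follows. This is a minimal pinhole-projection illustration; the rotation order, the focal length, and all function names are assumptions for illustration, not taken from the patent:

```python
import math

def rigid_body(point, rx, ry, tz):
    """Rigid body transform of a road-model point: rotate about the
    X and Y axes by rx and ry, then translate along Z by tz."""
    x, y, z = point
    # rotation about the X axis
    y, z = (y * math.cos(rx) - z * math.sin(rx),
            y * math.sin(rx) + z * math.cos(rx))
    # rotation about the Y axis
    x, z = (x * math.cos(ry) + z * math.sin(ry),
            -x * math.sin(ry) + z * math.cos(ry))
    return x, y, z + tz

def perspective(point, focal=1.0):
    """Pinhole perspective projection onto the image plane: divide by
    depth. Here the Y axis (forward direction) is taken as depth."""
    x, y, z = point
    return focal * x / y, focal * z / y

# A point 10 m ahead and 1 m to the side, with a small vertical offset tz.
p = rigid_body((1.0, 10.0, 0.0), rx=0.0, ry=0.0, tz=0.05)
u, v = perspective(p)
print(u, v)  # distant points project close to the image centre
```

  • in a real renderer the same transforms would be expressed as 4x4 model-view and projection matrices applied to every vertex of the road model.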
  • S210b to S213b are all real-time processing processes, that is, the in-vehicle computing platform can generate the visual compensation image in real time.
  • the in-vehicle computing platform can also generate, through other real-time processing methods, a visual compensation image that reflects the real-time motion state of the vehicle relative to the road from the perspective of the passengers in the vehicle; the embodiments of the present application are not limited here.
  • the above description takes the in-vehicle computing platform generating the visual compensation image in real time as an example.
  • alternatively, the in-vehicle entertainment information system can process the images captured in real time by the driving recorder to generate the visual compensation image, or process the preset road model image using the data detected in real time by motion sensors such as the gyroscope and accelerometer in the vehicle.
  • likewise, a terminal device used by the passengers in the vehicle can generate the visual compensation image in either of these two ways.
  • the in-vehicle computing platform sends the visual compensation image to the in-vehicle entertainment information system.
  • the in-vehicle computing platform can send the obtained visual compensation image C to the in-vehicle entertainment information system in real time through CAN bus, data line, or wireless communication.
  • the in-vehicle computing platform can also crop or scale the visual compensation image C according to the size of the display screen of the in-vehicle entertainment information system, and then send the cropped or scaled visual compensation image C to the in-vehicle entertainment information system.
  • in some embodiments, the method may not include S220.
  • in other embodiments, S220 may be replaced by: the terminal device sends the visual compensation image to the in-vehicle entertainment information system.
  • the in-vehicle entertainment information system displays the visual compensation image on the display screen.
  • the visual compensation image may be displayed as an overlay on the display screen.
  • the display screen also displays the videos, pictures, and web browsing interfaces that the passengers watch (referred to as the main interface). Therefore, to reduce or eliminate the impact of the visual compensation image on the main interface that passengers browse, the transparency of the visual compensation image can be adjusted when it is superimposed on the display screen, so that the visual compensation image can stimulate the user's vision while its impact on the main interface displayed on the display screen is reduced or eliminated. That is, on the basis of alleviating or eliminating the conflict between the optic nerve and the vestibular nerve, the impact on the user's use of the in-vehicle entertainment information system is reduced, thereby further improving the user experience.
  • for example, the following formula (6) can be used for the superimposed display: A = α·I + (1 − α)·C.
  • C represents the visual compensation image, I represents the main interface displayed on the display screen, α is the transparency parameter during the superimposed display, the value range of α is greater than 0 and less than 1, and A represents the main interface and the visual compensation image finally displayed on the display screen.
  • the final display effect can be adjusted by adjusting the value of α.
  • the larger the value of α, the greater the transparency of the visual compensation image C and the smaller its impact on the main interface; passengers can see, through the visual compensation image C, the part of the main interface covered by it. The smaller the value of α, the more opaque the visual compensation image C, and the more it covers part of the main interface.
  • the value of α may be pre-configured according to the specific conditions of the display screen (such as its resolution and screen size), external light, and so on.
  • for example, the values of α configured for the daytime period and the nighttime period may be different, and the values of α configured for different types or brands of vehicles may also be different.
  • the value of α can also be set by the user according to their own needs; furthermore, the user can update the value of α.
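  • the superimposed display described above amounts to per-pixel alpha blending. A minimal sketch, assuming the blend A = α·I + (1 − α)·C, which is consistent with a larger α making the compensation image C more transparent; the function names are illustrative:

```python
def blend_pixel(main_rgb, comp_rgb, alpha):
    """Blend a main-interface pixel I with a visual compensation pixel
    C per channel: A = alpha * I + (1 - alpha) * C. A larger alpha
    lets more of the main interface show through the compensation
    image."""
    return tuple(round(alpha * i + (1 - alpha) * c)
                 for i, c in zip(main_rgb, comp_rgb))

main_pixel = (40, 40, 200)     # e.g. a pixel of a video frame
comp_pixel = (255, 255, 255)   # an opaque white road pixel of C
print(blend_pixel(main_pixel, comp_pixel, alpha=0.8))  # mostly main interface
print(blend_pixel(main_pixel, comp_pixel, alpha=0.2))  # mostly compensation
```

  • applying this blend only where the compensation image is non-transparent leaves the rest of the main interface untouched.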
  • Figure 10 is a schematic diagram of an interface displayed after the visual compensation image is superimposed on the display screen.
  • in Figure 10, the visual compensation image C is transparent, and the road part F and the lane line L are of different colors. Passengers can see, through the visual compensation image C, the part of the main interface covered by it.
  • in addition to the in-vehicle entertainment information system automatically adjusting the transparency of the visual compensation image C (for example, using the above formula (6)), a control can be provided with which the user can manually adjust the transparency of the visual compensation image C according to their own needs. This can further meet the user's needs: the user can adjust the transparency of the visual compensation image C in real time as needed, improving the user experience.
  • the visual compensation image C can also be set on the main interface in the form of a floating window, and the user can adjust the position of the visual compensation image C in the main interface by hand. For example, you can use your finger to drag the visual compensation image C and move it on the display screen, thereby dragging the visual compensation image C to an appropriate position, which can further improve the user experience.
  • when the visual compensation image C is displayed on the main interface in the form of a floating window, it may not be necessary to adjust its transparency; in this case, the visual compensation image C may be opaque. After the in-vehicle entertainment information system obtains the visual compensation image C, it can display the image directly on the main interface in the form of a floating window without further processing (for example, without automatically or manually adjusting the transparency of the visual compensation image C).
  • the visual compensation image C when the visual compensation image C is displayed on the main interface in the form of a floating window, automatic adjustment or manual adjustment can also be used to adjust the transparency of the visual compensation image C.
  • the visual compensation image C and the main interface can also be displayed in a split-screen manner on the display screen of the in-vehicle entertainment information system. In this case, the visual compensation image C can be opaque: after the in-vehicle entertainment information system obtains the visual compensation image C, it can be displayed directly on the display screen in a split-screen manner together with the main interface without further processing (for example, without automatically or manually adjusting its transparency), which reduces the amount of computation and the processing complexity and saves computing resources.
  • the visual compensation image C can also correspond to a small icon displayed on the Dock of the display screen (the Dock can be understood as the row of "all applications" icons displayed at the bottom of the display screen). When the user feels motion sick, the user can tap the small icon corresponding to the visual compensation image C to open the visual compensation image C on the display screen.
  • the opened visual compensation image C can be displayed on the main interface in the form of a floating window, displayed on the display screen in a split-screen manner together with the main interface, or displayed on the display screen in the superimposed manner shown in Figure 10. The transparency of the visual compensation image C can also be adjusted automatically or manually.
  • the visual compensation image and the main interface can also be displayed on the display screen in other ways.
  • the embodiments of the present application are not limited here.
  • the vehicle entertainment information system displays the visual compensation image on the display screen as an example.
  • the in-vehicle computing platform may also control the display of the visual compensation image.
  • that is to say, the execution subject of the above S210 to S230 can be an in-vehicle computing platform, an in-vehicle entertainment information system, or a system including an in-vehicle entertainment information system and an in-vehicle computing platform; the embodiments of the present application do not limit this.
  • the execution subject of the above S210 to S230 may also be a terminal device (such as a mobile phone or a tablet computer) used by passengers in the vehicle.
  • after the terminal device generates the visual compensation image, it may also display the visual compensation image on its own display screen for passengers to view; that is, S230 can also be replaced by: the terminal device displays the visual compensation image on the display screen. The embodiments of the present application do not limit this.
  • in the above manner, the visual compensation image and the main interface are displayed on the display screen, so that the visual compensation image can stimulate the user's vision while its impact on the main interface is reduced or eliminated. That is, on the basis of alleviating or eliminating motion sickness symptoms, the impact on the user's use of the in-vehicle entertainment information system is reduced, thereby further improving the user experience.
  • in summary, the anti-motion sickness method based on visual compensation images generates a visual compensation image in real time, and the visual compensation image reflects the real-time motion state of the vehicle relative to the road.
  • the visual compensation image includes the road and the lane lines, and the road and the lane lines are each a different color.
  • the visual compensation image is superimposed on the display screen of the in-vehicle entertainment information system for display. The user can see their own motion relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the optic nerve and the vestibular nerve and thus solving the problem of motion sickness.
  • in the visual compensation image, the areas other than the road portion and lane lines are transparent, which avoids interference from the road background in the visual compensation image, gives a better visualization effect, and, on the basis of solving the passengers' motion sickness, improves the accuracy of the visual compensation image.
  • the transparency of the visual compensation image can be adjusted, and the visual compensation image can stimulate the user's vision while its impact on the main interface displayed on the display screen is reduced or eliminated, thereby further improving the user experience.
  • "predefined" or "preset" can be realized by pre-saving corresponding code, tables, or other means that can be used to indicate relevant information in the device; the specific implementation is not limited in this application.
  • This embodiment can divide each device (such as an in-vehicle computing platform, an in-vehicle entertainment information system, a terminal device used by in-vehicle passengers, etc.) into functional modules according to the above method.
  • each functional module may be divided according to a corresponding function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • Embodiments of the present application also provide an anti-motion sickness system based on visual compensation images.
  • the system includes: an in-vehicle computing platform and an in-vehicle entertainment information system.
  • the system may also include: a driving recorder and/or a motion sensor, for example, as shown in Figure 1.
  • the vehicle entertainment information system includes a display screen, which can be set on the back of a front seat. Rear passengers can use the vehicle entertainment information system to watch videos, view pictures, browse the web, and so on.
  • the in-vehicle computing platform can include: CDC, vehicle ECU, driving computer, vehicle-mounted computer or vehicle-mounted T-BOX, etc.
  • Motion sensors may include, for example, gyroscopes and accelerometers.
  • the motion sensor can be set in the vehicle entertainment information system, or in the in-car computing platform, or in other locations in the car.
  • An embodiment of the present application also provides an anti-motion sickness system based on visual compensation images.
  • the system includes: a terminal device used by passengers in the car, and the terminal device has a display screen.
  • the system can also include: driving recorder and/or motion sensor.
  • the anti-motion sickness system based on the visual compensation image provided by the embodiment of the present application is used to perform the above anti-motion sickness method based on the visual compensation image, and therefore can achieve the same effect as the above implementation method.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the anti-motion sickness system based on visual compensation images.
  • the anti-motion sickness system based on visual compensation images may include more or less components, or combine some components, or split some components, or arrange different components.
  • Embodiments of the present application also provide an anti-motion sickness device based on visual compensation images.
  • the device may be, for example, an in-vehicle computing platform, an in-vehicle entertainment information system, or a terminal device used by passengers in the vehicle.
  • the device may include a processing module, a storage module and a communication module.
  • the processing module can be used to control and manage the actions of the device. For example, it can be used to support the device to perform steps performed by the processing unit.
  • Storage modules can be used to support storage of program code and data, etc.
  • the communication module can be used to support communication between the device and other devices (such as motion sensors, driving recorders, etc.).
  • the processing module may be a processor or a controller. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure.
  • a processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor (DSP) and a microprocessor, and so on.
  • the storage module may be a memory.
  • the communication module can specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip and other devices that interact with other electronic devices.
  • FIG. 11 shows a schematic hardware structure diagram of an anti-motion sickness device 500 based on visual compensation images provided by this application.
  • the device 500 may be the above-mentioned in-vehicle computing platform, in-vehicle entertainment information system, or terminal device used by in-vehicle passengers.
  • the device 500 may include a processor 510, an external memory interface 520, an internal memory 521, a universal serial bus (USB) interface 530, a charging management module 540, a power management module 541, a battery 542, a wireless communication module 550, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the device 500 .
  • the device 500 may include more or fewer components than shown in the figures, or some components may be combined, or some components may be separated, or may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the device 500 when the device 500 is a vehicle-mounted entertainment information system or a terminal device, the device may also include a display screen.
  • Processor 510 may include one or more processing units.
  • the processor 510 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural network processing unit (NPU), etc.
  • different processing units can be independent components or integrated in one or more processors.
  • apparatus 500 may also include one or more processors 510.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • processor 510 may include one or more interfaces.
  • the interface can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
  • the USB interface 530 is an interface that complies with the USB standard specification, and can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 530 can be used to connect a charger to charge the device 500, and can also be used to transmit data between the device 500 and peripheral devices.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation on the device 500 .
  • the device 500 may also adopt an interface connection manner different from that in the above embodiments, or a combination of multiple interface connection manners.
  • the wireless communication module 550 can provide wireless communication solutions applied on the device 500, including Wi-Fi (including Wi-Fi sensing and Wi-Fi AP), Bluetooth (BT), and wireless data transmission modules (for example, 433MHz, 868MHz, 915MHz).
  • the wireless communication module 550 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 550 receives electromagnetic waves via antenna 1 or antenna 2 (or antenna 1 and antenna 2), filters and frequency modulates the electromagnetic wave signals, and sends the processed signals to the processor 510.
  • the wireless communication module 550 can also receive the signal to be sent from the processor 510, frequency modulate it, amplify it, and convert it into electromagnetic waves for radiation.
  • the external memory interface 520 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the device 500.
  • the external memory card communicates with the processor 510 through the external memory interface 520 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 521 may be used to store one or more computer programs including instructions.
  • the processor 510 can execute the above instructions stored in the internal memory 521 to cause the device 500 to execute the anti-motion sickness method based on visual compensation images provided in some embodiments of the present application, as well as various applications and data processing.
  • Internal memory 521 may include code storage areas and data storage areas. Among them, the code storage area can store the operating system. The data storage area may store data created during use of the device 500, etc.
  • the internal memory 521 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage components, flash memory components, universal flash storage (UFS), etc.
  • the processor 510 can, by executing instructions stored in the internal memory 521 and/or instructions stored in the memory provided in the processor 510, cause the device 500 to execute the anti-motion sickness method based on visual compensation images provided in the embodiments of the present application, as well as other applications and data processing.
  • Figure 12 shows a schematic block diagram of another anti-motion sickness device 600 based on visual compensation images provided by the embodiment of the present application.
  • the device 600 may correspond to the in-vehicle computing platform, the in-vehicle entertainment information system, or the terminal device used by in-vehicle passengers described in the above method embodiments. It may also be a chip or component applied to an in-vehicle computing platform, an in-vehicle entertainment information system, or a terminal device used by in-vehicle passengers, and each module or unit in the device 600 is used to perform the steps described in the above method embodiments.
  • the device 600 may include: a processing unit 610 and a communication unit 620.
  • the device 600 may also include a storage unit 630.
  • the communication unit 620 may include a receiving unit (module) and a sending unit (module), used to perform the steps of receiving information and sending information in each of the foregoing method embodiments.
  • the storage unit 630 is used to store instructions executed by the processing unit 610 and the communication unit 620.
  • the processing unit 610, the communication unit 620 and the storage unit 630 are communicatively connected.
  • the storage unit 630 stores instructions, the processing unit 610 is used to execute the instructions stored in the storage unit, and the communication unit 620 is used to perform specific signal transmission and reception under the driving of the processing unit 610.
  • the communication unit 620 may be a transceiver, an input/output interface or an interface circuit, etc., and may be implemented by the wireless communication module 550 in the embodiment shown in FIG. 11 , for example.
  • the storage unit may be a memory, for example, it may be implemented by the external memory interface 520 and the internal memory 521 in the embodiment shown in FIG. 11 .
  • the processing unit 610 may be implemented by the processor 510 in the embodiment shown in FIG. 11, or may be implemented by the processor 510, an external memory interface 520, and an internal memory 521.
  • each unit in the above device can be fully or partially integrated into a physical entity, or they can also be physically separated.
  • the units in the device can all be implemented in the form of software called by processing elements; they can also all be implemented in the form of hardware; or some units can be implemented in the form of software called by processing elements while other units are implemented in the form of hardware.
  • each unit can be a separate processing element, or it can be integrated and implemented in a certain chip of the device.
  • it can also be stored in the memory in the form of a program, and a certain processing element of the device can call and execute the function of the unit.
  • the processing element here can also be called a processor, and can be an integrated circuit with signal processing capabilities.
  • each step of the above method or each unit above can be implemented by an integrated logic circuit of hardware in the processor element or implemented in the form of software calling through the processing element.
  • the unit in any of the above devices may be one or more integrated circuits configured to implement the above method, such as: one or more application specific integrated circuits (ASICs), or one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
  • the processing element can be a general processor, such as a central processing unit (CPU) or other processor that can call a program.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • the chip system includes at least one processor 710 and at least one interface circuit 720 .
  • the processor 710 and the interface circuit 720 may be interconnected by wires.
  • interface circuitry 720 may be used to receive signals from other devices, such as an in-vehicle computing platform or an in-vehicle infotainment system.
  • interface circuitry 720 may be used to send signals to other devices (eg, processor 710).
  • the interface circuit 720 can read instructions stored in the memory and send the instructions to the processor 710 .
  • the chip system can be caused to perform various steps performed by the in-vehicle computing platform or the in-vehicle entertainment information system in the above embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiments of this application.
  • each unit in the above device can be fully or partially integrated into a physical entity, or they can also be physically separated.
  • the units in the device can all be implemented in the form of software called by processing elements; they can also all be implemented in the form of hardware; or some units can be implemented in the form of software called by processing elements while other units are implemented in the form of hardware.
  • each unit can be a separate processing element, or it can be integrated and implemented in a certain chip of the device.
  • it can also be stored in the memory in the form of a program, and a certain processing element of the device can call and execute the function of the unit.
  • the processing element here can also be called a processor, and can be an integrated circuit with signal processing capabilities.
  • each step of the above method or each unit above can be implemented by an integrated logic circuit of hardware in the processor element or implemented in the form of software calling through the processing element.
  • the unit in any of the above devices may be one or more integrated circuits configured to implement the above method, such as: one or more application specific integrated circuits (ASICs), or one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
  • the processing element can be a general processor, such as a central processing unit (Central Processing Unit, CPU) or other processors that can call programs.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • Embodiments of the present application also provide a computer-readable storage medium for storing computer program code.
  • the computer program includes a method for executing any of the above-mentioned anti-motion sickness methods based on visual compensation images provided by the embodiments of the present application. instructions.
  • the readable medium may be a read-only memory (ROM) or a random access memory (RAM), which is not limited in the embodiments of the present application.
  • This application also provides a computer program product.
  • the computer program product includes instructions. When the instructions are executed, the in-vehicle computing platform, the anti-motion sickness device based on visual compensation images, the in-vehicle entertainment information system, or the anti-motion sickness system based on visual compensation images performs operations corresponding to those in the above methods.
  • the embodiment of the present application also provides a chip located in a communication device.
  • the chip includes: a processing unit and a communication unit.
  • the processing unit may be, for example, a processor.
  • the communication unit may be, for example, an input/output interface, a pin, or a circuit, etc.
  • the processing unit can execute computer instructions to cause the communication device to execute any of the above-mentioned anti-motion sickness methods based on visual compensation images provided by the embodiments of the present application.
  • the computer instructions are stored in a storage unit.
  • the storage unit is a storage unit within the chip, such as a register, cache, etc.
  • the storage unit may also be a storage unit located outside the chip within the terminal, such as a ROM or another type of static storage device that can store static information and instructions, a random access memory (RAM), etc.
  • the processor mentioned in any of the above places may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the program execution of the above methods.
  • the processing unit and the storage unit can be decoupled, respectively installed on different physical devices, and connected in a wired or wireless manner to realize their respective functions, so as to support the system chip in implementing the various functions in the above embodiments.
  • the processing unit and the memory may be coupled on the same device.
  • An embodiment of the present application also provides a vehicle, which includes the anti-motion sickness device based on visual compensation images, the anti-motion sickness system, chip system or chip based on visual compensation images provided in the above embodiments of the present application.
  • the anti-motion sickness system based on visual compensation images, the anti-motion sickness device based on visual compensation images, the vehicle, the computer-readable storage medium, the computer program product, and the chip provided in this embodiment are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference can be made to the beneficial effects of the corresponding methods provided above, which will not be described again here.
  • the memory in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • the non-volatile memory can be ROM, programmable ROM (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
  • Volatile memory can be RAM, which acts as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM), enhanced synchronous dynamic RAM (ESDRAM), synchlink dynamic RAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the methods in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, the methods may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer program or instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium.
  • the computer-readable storage medium can be any available medium accessible to a computer, or a data storage device such as a server that integrates one or more available media.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the existing technology, or a part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned readable storage media include: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

An anti-motion sickness method, apparatus and system based on visual compensation images. A visual compensation image is generated in real time; the image reflects the real-time motion state of the vehicle relative to the road (or the ground) and includes the road and lane lines. The visual compensation image is displayed on the display screen of the in-vehicle infotainment system, so that the user can see his or her own motion state relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness. Moreover, the user does not need to wear special glasses, which is user-friendly and improves the user experience.

Description

Anti-motion sickness method, apparatus and system based on visual compensation images
This application claims priority to Chinese patent application No. 202210261485.4, entitled "Anti-motion sickness method, apparatus and system based on visual compensation images", filed with the China National Intellectual Property Administration on March 16, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of vehicles, and more specifically, to an anti-motion sickness method, apparatus and system based on visual compensation images.
Background
Passengers inside a vehicle (for example, rear-seat passengers) suffer from motion sickness because the enclosed environment of the vehicle blocks their line of sight: the visual channel cannot provide the passengers with effective feedback on their motion state, i.e., it cannot convey the motion of the vehicle body relative to the ground, while the vestibular system in the ear can sense that motion. This produces a conflict between visual and vestibular perception, and the passengers develop motion sickness symptoms such as fainting, nausea, and loss of appetite.
At present, the usual approach to treating or preventing motion sickness is medication; however, medication has side effects on passengers. Motion sickness can also be treated or prevented by wearing special anti-motion sickness glasses, but the passengers need to wear the glasses, the wearing comfort is low, which is not user-friendly and results in a poor user experience.
Summary
This application provides an anti-motion sickness method, apparatus and system based on visual compensation images. A visual compensation image is generated in real time; it reflects the real-time motion state of the vehicle relative to the road (or the ground) and includes the road and lane lines. The visual compensation image is superimposed on the display screen of the in-vehicle infotainment system, so that the user can see his or her own motion state relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness. Moreover, the user does not need to wear special glasses, which is user-friendly and improves the user experience.
In a first aspect, an anti-motion sickness method based on visual compensation images is provided. The method includes: generating a visual compensation image that reflects, from the perspective of a passenger in the vehicle, the real-time motion state of the vehicle relative to the road, the visual compensation image including a road portion and lane lines; and displaying the visual compensation image on a display screen of an in-vehicle entertainment information system.
In the anti-motion sickness method based on visual compensation images provided in the first aspect, a visual compensation image is generated in real time; it reflects the real-time motion state of the vehicle relative to the road and includes the road and lane lines, each rendered in a different color. The visual compensation image is superimposed on the display screen of the in-vehicle infotainment system, so that the user can see his or her own motion state relative to the ground on the display screen, thereby alleviating or eliminating the conflict between the visual nerve and the vestibular nerve and thus solving the problem of motion sickness.
In a possible implementation of the first aspect, generating the visual compensation image includes: obtaining images captured in real time by a driving recorder in the vehicle; detecting the road portion and lane lines in the image; and filling the road portion and lane lines in the image with non-transparent colors and filling the parts of the image other than the road portion and lane lines with a transparent color, to obtain the visual compensation image, in which the road portion and the lane lines have different colors. In this implementation, the images captured in real time by the driving recorder reflect, from the perspective of a passenger in the vehicle, the real-time motion state of the vehicle relative to the road (or the surrounding environment). Moreover, by filling the road portion and lane lines with non-transparent colors and filling the other regions with a transparent color, interference from the other regions of the driving recorder's images (i.e., the regions other than the road portion and lane lines) is avoided; that is, interference from the road background in the visual compensation image is avoided. This gives a better visualization effect and, on the basis of solving the passengers' motion sickness, improves the accuracy of the visual compensation image.
In a possible implementation of the first aspect, generating the visual compensation image includes: obtaining images captured in real time by a driving recorder in the vehicle; detecting the road portion and lane lines in the image; converting the image into a binary image in which the road portion and the lane lines are white and the parts other than the road portion and lane lines are black; and filling the road portion and lane lines in the binary image with non-transparent colors and the remaining parts with a transparent color, to obtain the visual compensation image, in which the road portion and the lane lines have different colors. In this implementation, the images captured in real time by the driving recorder reflect, from the perspective of a passenger in the vehicle, the real-time motion state of the vehicle relative to the road (or the surrounding environment), and, since colors are filled on the pixels of a binary image, the complexity and computational cost of filling the pixels can be reduced, making the method easy to implement. Moreover, filling the regions other than the road portion and lane lines with a transparent color avoids interference from the road background in the image and gives a better visualization effect.
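The filling step described above can be sketched as follows. This is only a minimal illustration, not the patent's implementation: the input is assumed to be two binary masks (road and lane-line pixels, as produced by the detection and binarization steps), and the specific RGBA colors are assumptions.

```python
# Sketch of the fill step: binary masks (1 = road or lane pixel, 0 = background)
# are turned into an RGBA visual compensation image in which the road portion is
# one opaque color, the lane lines another, and everything else is transparent.

ROAD = (90, 90, 90, 255)      # assumed opaque gray for the road portion
LANE = (255, 255, 255, 255)   # assumed opaque white for the lane lines
CLEAR = (0, 0, 0, 0)          # transparent background (alpha = 0)

def fill_compensation_image(road_mask, lane_mask):
    """road_mask/lane_mask: 2D lists of 0/1; returns a 2D list of RGBA tuples."""
    height, width = len(road_mask), len(road_mask[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            if lane_mask[y][x]:      # lane lines take priority over the road fill
                row.append(LANE)
            elif road_mask[y][x]:
                row.append(ROAD)
            else:
                row.append(CLEAR)    # non-road region stays transparent
        out.append(row)
    return out
```

Because the alpha channel of the background is zero, superimposing this image on the main interface leaves everything except the road and lane lines unobscured.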
In a possible implementation of the first aspect, generating the visual compensation image includes: obtaining motion parameters detected in real time by a gyroscope and an accelerometer in the vehicle; generating a three-dimensional rotation vector (rx, ry, rz) and a three-dimensional translation vector (tx, ty, tz) of the vehicle from the motion parameters; using the three-dimensional rotation vector (rx, ry, rz) and the three-dimensional translation vector (tx, ty, tz) to process the road texture in a preset road model, the road texture including a road portion and lane lines of different colors; superimposing the processed road texture onto the road model; and applying a rigid-body transformation and a perspective transformation to the road model with the processed road texture superimposed, to obtain the visual compensation image. In this implementation, the data detected in real time by motion sensors such as the gyroscope and the accelerometer is used to process the preset road model image and generate the visual compensation image; no camera is needed, and the computational and data-transmission costs are relatively low.
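One simple way to turn raw sensor samples into the rotation vector (rx, ry, rz) and translation vector (tx, ty, tz) is to integrate the gyroscope's angular rates and the accelerometer's readings over a frame interval. The sketch below is only an assumed construction (the patent does not specify the integration scheme); a real system would add filtering and bias compensation, which are omitted here.

```python
# Rough sketch: integrate gyroscope angular rates (rad/s) into a rotation vector
# (rx, ry, rz) and accelerometer readings (m/s^2) into a velocity vector
# (tx, ty, tz) over samples spaced dt seconds apart. Axes follow the patent's
# convention: X = right of the vehicle body, Y = forward, Z = straight up.

def integrate_motion(gyro_samples, accel_samples, dt):
    rx = ry = rz = 0.0          # accumulated rotation about X, Y, Z (rad)
    tx = ty = tz = 0.0          # accumulated velocity along X, Y, Z (m/s)
    for (gx, gy, gz), (ax, ay, az) in zip(gyro_samples, accel_samples):
        rx += gx * dt; ry += gy * dt; rz += gz * dt
        tx += ax * dt; ty += ay * dt; tz += az * dt
    return (rx, ry, rz), (tx, ty, tz)
```

The resulting rz (yaw) drives the bending deformation of the road texture, and ty (forward speed) drives its cyclic movement, as described below.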
In a possible implementation of the first aspect, using the three-dimensional rotation vector (rx, ry, rz) and the three-dimensional translation vector (tx, ty, tz) to process the preset road model includes: applying a bending deformation to the road texture using the curvature rz about the Z axis in the three-dimensional rotation vector; and applying a cyclic movement deformation to the road texture using the speed ty along the Y axis in the three-dimensional translation vector. For the three-dimensional rotation vector (rx, ry, rz) and the three-dimensional translation vector (tx, ty, tz), the positive direction of the X axis is the right-hand direction of the vehicle body, the positive direction of the Y axis is the forward direction of the vehicle, and the positive direction of the Z axis is directly above the vehicle body. In this implementation, after the cyclic movement deformation and the bending deformation are applied to the road texture, the processed road texture is obtained. Since the above processing is performed in real time, the processed road texture reflects the real-time motion state of the vehicle. That is to say, in the processed road texture, the shape and position of the road portion and lane lines change in real time as the vehicle moves, and these changes reflect the real-time motion state of the vehicle relative to the road portion and lane lines.
For example, after the bending deformation is applied to the road texture, the deformation of the road texture reflects the motion of the vehicle about the Z axis during real-time movement; for example, the motion state of the vehicle when turning left or right is reflected in the deformation of the road texture.
In a possible implementation of the first aspect, the following formula is used to apply the bending deformation to the road texture:
where u' denotes the value on the u axis after the road texture is bent with the corresponding curvature rz, v' denotes the value on the v axis after the road texture is bent with the corresponding curvature rz, u denotes the value on the u axis before the bending, v denotes the value on the v axis before the bending, and k is a parameter controlling the steering angle and the texture curvature. A texture coordinate system exists in the road texture, including a u axis and a v axis: the u axis is the direction perpendicular to the lane lines, and the v axis is the direction parallel to the lane lines. In this implementation, the accuracy of applying the bending deformation to the road texture can be improved.
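The bending formula itself appears as an image in the original filing and is not reproduced in this text. The sketch below is therefore only an assumed form consistent with the surrounding variable description: each texture point is shifted along u by an amount that grows with v and with the yaw curvature rz, scaled by k. The patent's exact expression may differ.

```python
# Hypothetical bending deformation of texture coordinates (u, v): points are
# shifted along u (perpendicular to the lane lines) by an amount that grows
# with v (distance along the road) and with the yaw curvature rz, scaled by k.
# The patent's actual formula is given as an image and may differ from this.

def bend(u, v, rz, k):
    u_prime = u + k * rz * v * v   # lateral shift grows quadratically with v
    v_prime = v                    # distance along the road is unchanged
    return u_prime, v_prime
```

With this form, points near the bottom of the texture (v close to 0, i.e., near the vehicle) barely move, while distant points curve sideways, giving the visual impression of the road ahead bending in the direction of the turn.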
For example, after the cyclic movement deformation is applied to the road texture, the deformation of the road texture reflects the motion of the vehicle along the Y axis during real-time movement; for example, the bumping of the vehicle on a rough road is reflected in the deformation of the road texture.
In a possible implementation of the first aspect, the following formula is used to apply the cyclic movement deformation to the road texture:
where u' denotes the value on the u axis after the road texture is cyclically moved at the corresponding speed ty, v' denotes the value on the v axis after the road texture is cyclically moved at the corresponding speed ty, u denotes the value on the u axis before the cyclic movement, v denotes the value on the v axis before the cyclic movement, and s is a parameter controlling the texture movement speed. A texture coordinate system exists in the road texture, including a u axis and a v axis: the u axis is the direction perpendicular to the lane lines, and the v axis is the direction parallel to the lane lines. In this implementation, the accuracy of applying the cyclic movement deformation to the road texture can be improved.
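As with the bending formula, the cyclic-movement formula appears as an image in the original filing, so the sketch below is only an assumed form consistent with the variable description: the texture is scrolled along v by s·ty and wraps around within one texture tile. The patent's exact expression may differ.

```python
# Hypothetical cyclic movement of texture coordinates (u, v): the texture is
# scrolled along v (parallel to the lane lines) by s * ty and wraps around,
# so the road appears to stream past at the vehicle's forward speed.
# The patent's actual formula is given as an image and may differ from this.

def scroll(u, v, ty, s):
    u_prime = u                        # perpendicular coordinate unchanged
    v_prime = (v + s * ty) % 1.0       # wrap within the [0, 1) texture tile
    return u_prime, v_prime
```

The modulo wrap is what makes the movement "cyclic": the same tileable road texture can scroll indefinitely without ever running out of texture.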
In a possible implementation of the first aspect, applying the rigid-body transformation to the road model with the processed road texture superimposed includes: applying the rigid-body transformation to the road model with the processed road texture superimposed using the vector (rx, ry, tz). In this implementation, the image obtained after the perspective transformation is the visual compensation image from the perspective of a passenger in the vehicle, which improves the accuracy of the visual compensation image.
In a possible implementation of the first aspect, displaying the visual compensation image on the display screen of the in-vehicle entertainment information system includes: superimposing the visual compensation image on the main interface displayed on the display screen. In this implementation, the visual compensation image stimulates the user's vision while its impact on the main interface displayed on the display screen is reduced or eliminated. That is, on the basis of alleviating or eliminating the conflict between the visual nerve and the vestibular nerve, the impact on the user's use of the in-vehicle entertainment information system is reduced, thereby further improving the user experience.
In a possible implementation of the first aspect, the method further includes: using the following formula to determine the main interface and visual compensation image finally displayed on the display screen:
A=α×C+(1-α)I
where C denotes the visual compensation image, I denotes the main interface displayed on the display screen, α is a transparency parameter for the superimposed display, with a value greater than 0 and less than 1, A denotes the main interface and visual compensation image finally displayed on the display screen, and α is preconfigured. In this implementation, the transparency of the visual compensation image can be adjusted automatically, improving the user experience.
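Applied per pixel, the blending formula above can be sketched as follows. This is a minimal illustration assuming RGB integer pixels; the patent does not specify the pixel format or rounding behavior.

```python
# Per-pixel alpha blending A = alpha * C + (1 - alpha) * I, where C is the
# visual compensation image, I is the main interface, and 0 < alpha < 1 is
# the preconfigured transparency parameter.

def blend_pixel(c, i, alpha):
    """c, i: (R, G, B) tuples; returns the blended (R, G, B) pixel."""
    assert 0.0 < alpha < 1.0
    return tuple(round(alpha * cc + (1.0 - alpha) * ii) for cc, ii in zip(c, i))
```

A larger α makes the compensation image more prominent; a smaller α leaves the main interface more visible, matching the trade-off described in the text.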
In a possible implementation of the first aspect, a control for the user to manually adjust the transparency of the visual compensation image is also present on the display interface of the display screen. The user can use this control to manually adjust the transparency of the visual compensation image according to his or her needs. This further satisfies the user's needs: the user can adjust the transparency of the visual compensation image in real time as required, improving the user experience.
For example, the in-vehicle computing platform may generate the visual compensation image in real time; alternatively, the in-vehicle entertainment information system may process the images captured in real time by the driving recorder to generate the visual compensation image, or may use the data detected in real time by motion sensors such as the gyroscope and accelerometer in the vehicle to process a preset road model image and generate the visual compensation image. Alternatively, a terminal device used by a passenger in the vehicle (for example, a mobile phone or tablet computer) may process the images captured in real time by the driving recorder, or use the data detected in real time by motion sensors such as the gyroscope and accelerometer in the vehicle to process a preset road model image, to generate the visual compensation image.
For example, if the visual compensation image is generated by a terminal device used by a passenger in the vehicle, the terminal device may send the visual compensation image to the in-vehicle entertainment information system.
For example, after the terminal device used by a passenger in the vehicle generates the visual compensation image, the image may also be finally displayed on the display screen of the terminal device.
For example, the visual compensation image may be displayed on the main interface in the form of a floating window, or displayed on the display screen together with the main interface in a split-screen manner, or displayed on the display screen by way of image superimposition.
In a second aspect, an anti-motion sickness apparatus based on visual compensation images is provided. The apparatus includes a processor and a memory; the processor is coupled to the memory, the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method in the first aspect or any possible implementation of the first aspect is performed.
In a third aspect, an anti-motion sickness apparatus based on visual compensation images is provided. The apparatus includes at least one processor and an interface circuit, and the at least one processor is configured to perform the method in the first aspect or any possible implementation of the first aspect.
For example, the anti-motion sickness apparatus based on visual compensation images may be an in-vehicle computing platform, an in-vehicle entertainment information system, a terminal device used by a passenger in the vehicle, or the like; alternatively, the in-vehicle computing platform, the in-vehicle entertainment information system, or the terminal device used by a passenger in the vehicle may include the anti-motion sickness apparatus based on visual compensation images.
In a fourth aspect, an anti-motion sickness system based on visual compensation images is provided. The system includes an in-vehicle computing platform and an in-vehicle entertainment information system, and the system is configured to perform the method in the first aspect or any possible implementation of the first aspect.
In a possible implementation of the fourth aspect, the system further includes at least one of a driving recorder and a motion sensor, the motion sensor including a gyroscope and an accelerometer.
In a fifth aspect, a vehicle is provided. The vehicle includes the anti-motion sickness apparatus based on visual compensation images provided in the second or third aspect, or the anti-motion sickness system based on visual compensation images provided in the fourth aspect or any possible implementation of the fourth aspect.
In a sixth aspect, a computer program product is provided. The computer program product includes a computer program that, when executed by a processor, performs the method in the first aspect or any possible implementation of the first aspect.
In a seventh aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program that, when executed, performs the method in the first aspect or any possible implementation of the first aspect.
In an eighth aspect, a chip is provided. The chip includes a processor configured to call and run a computer program from a memory, so that a communication device on which the chip is installed performs the method in the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings
Figure 1 is a schematic diagram of an example communication system architecture applicable to embodiments of this application.
Figure 2 is a schematic flowchart of an example anti-motion sickness method based on visual compensation images provided by an embodiment of this application.
Figure 3 is a schematic flowchart of an example in which an in-vehicle computing platform generates a visual compensation image from images captured in real time by a driving recorder, provided by an embodiment of this application.
Figure 4 is a schematic diagram of an example visual compensation image generated by an in-vehicle computing platform from images captured in real time by a driving recorder, provided by an embodiment of this application.
Figure 5 is a schematic flowchart of an example in which an in-vehicle computing platform generates a visual compensation image from data detected in real time by a gyroscope and an accelerometer, provided by an embodiment of this application.
Figure 6 is a schematic diagram of the X, Y, and Z axes on an example vehicle provided by an embodiment of this application.
Figure 7 is a schematic diagram of an example road model provided by an embodiment of this application.
Figure 8 is a schematic diagram of an example road texture before and after the cyclic movement deformation and bending deformation are applied, provided by an embodiment of this application.
Figure 9 is a schematic diagram of an example visual compensation image from the perspective of a passenger in the vehicle, obtained by applying a perspective transformation to the road model, provided by an embodiment of this application.
Figure 10 is a schematic interface diagram of an example display after a visual compensation image is superimposed on the display screen, provided by an embodiment of this application.
Figure 11 is a schematic block diagram of the structure of an example anti-motion sickness apparatus based on visual compensation images provided by an embodiment of this application.
Figure 12 is a schematic block diagram of the structure of another example anti-motion sickness apparatus based on visual compensation images provided by an embodiment of this application.
Figure 13 is a schematic block diagram of an example chip system structure provided by an embodiment of this application.
Detailed Description
The technical solutions in this application are described below with reference to the accompanying drawings.
In the description of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.
The terms "first" and "second" below are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of this embodiment, unless otherwise specified, "multiple" means two or more.
In addition, various aspects or features of this application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" used in this application covers a computer program accessible from any computer-readable device, carrier, or medium. For example, computer-readable media may include, but are not limited to: magnetic storage devices (for example, hard disks, floppy disks, or magnetic tapes), optical discs (for example, compact discs (CDs) and digital versatile discs (DVDs)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives). In addition, the various storage media described herein may represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
The technical solutions of the embodiments of this application can be applied to various communication systems, such as: the Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA) systems, Wideband Code Division Multiple Access (WCDMA) systems, General Packet Radio Service (GPRS), Long Term Evolution (LTE) systems, LTE Frequency Division Duplex (FDD) systems, LTE Time Division Duplex (TDD), the Universal Mobile Telecommunication System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX) communication systems, fifth generation (5G) systems, New Radio (NR), and so on.
当人眼感知到运动与位于耳内的前庭系统感知的运动不相符时，就会有昏厥、恶心、食欲减退等症状出现，医学上称为晕动症(Motion sickness)，这种症状易见于颠簸的封闭环境，如汽车、轮船、飞机等。例如：乘坐于汽车后排的乘客常易发生晕车，也就是我们常见的晕车症。乘客发生晕车是因为车内封闭环境阻挡了视线，视觉通道无法给乘客提供有效的运动状态反馈，即无法为乘客提供车身相对于地面的运动状态，而位于耳朵内的前庭系统却可以感觉到车身相对于地面的运动状态，从而产生视觉与前庭神经感知的冲突。基于以上原理，解决晕车的关键在于消除或者缓解视觉神经与前庭神经的冲突。
目前,对于解决或者预防晕车而言,通常的做法是药物疗法,通过服用抑制前庭神经或中枢神经的药物,来消除或者缓解视觉神经与前庭神经的冲突从而缓解晕动症状。例如,常用的药物有茶苯海明、镇静剂等。但是,药物疗法会对乘客产生副作用,例如,会使得乘客产生嗜睡、疲乏等症状,影响用户的身体健康。
市场中还有一种腕带，乘客佩戴该腕带后，通过腕带压迫内关穴产生抑制前庭神经的作用，来消除或者缓解视觉神经与前庭神经的冲突从而缓解晕动症状。但是乘客需要佩戴该腕带，佩戴的舒适度不高，对用户不够友好，用户体验较低。
目前,业内也有相应的方案来解决乘客的晕车问题,例如防晕车眼镜,防晕车眼镜可以采用四个圆环的设计,前面两个,两侧各一个,每个圆环里面均有蓝色的液体,液体可以摇晃,汽车行驶时的加减速变化可以通过液平面表现出来。乘客戴上防晕车眼镜后,眼部能感觉到蓝色液体的运动,就可以减缓视觉神经与前庭神经的冲突,也就解决了晕车晕船的问题。
但是采用如上的方案,乘客需要佩戴防晕车眼镜,佩戴的舒适度不高,对用户不够友好,并且不适合近视用户,用户体验较低。
基于同样原理,目前还设计出一些增强现实(augmented reality,AR)眼镜或者虚拟现实(virtual reality,VR)眼镜,其原理是在AR眼镜或者VR眼镜的显示画面中绘制水平仪或其它能够反映乘客自身运动状态的图像。乘客自身运动状态可以从AR眼镜或者VR眼镜中内置的传感器(例如陀螺仪、加速度计等)获取。
但是采用如上的方案,乘客同样需要AR眼镜或者VR眼镜,佩戴的舒适度不高,对用户不够友好,并且不适合近视用户,用户体验较低。
有鉴于此，本申请提供了一种基于视觉补偿图像的防晕车方法，通过实时地生成视觉补偿图像，该视觉补偿图像可以反映出车辆相对道路(或者地面)的实时运动状态，该视觉补偿图像中包括道路和车道线。将该视觉补偿图像叠加在车载娱乐信息系统的显示屏上进行显示，用户便可在该显示屏上看到自己相对于地面的运动状态，从而缓解或者消除视觉神经与前庭神经的冲突，也就解决了晕车的问题。并且，不需要用户佩戴其他特制的眼镜，对用户友好，提高用户体验。
下面将具体说明本申请提供的基于视觉补偿图像的防晕车方法。
图1所示的为一例适用于本申请实施例的通信系统架构的示意图,如图1所示的,该通信系统包括:位于车辆内部的行车记录仪和运动传感器(例如陀螺仪和加速度计等)中的至少一个、以及车内计算平台和车载娱乐信息系统。其中,车内计算平台可以理解为车内的处理器。例如,在一些实施例中,车内计算平台可以包括:座舱域控制器(Cockpit Domain Controller,CDC)、车上的电子控制单元(electronic control unit,ECU)、行车电脑、车载电脑或者车载T-BOX等,本申请实施例在此不做限制。车载娱乐信息系统可以包括显示屏,例如,该显示屏可以设置在前排座椅的背面,后排乘客可以利用该车载娱乐信息系统观看视频、图片、浏览网页等。
可选的,在一些实施例中,车内计算平台、车载娱乐信息系统以及行车记录仪之间通信连接,可以相互传输数据。例如,如图1所示的,车载娱乐信息系统以及行车记录仪分别和车内计算平台通过数据线(即有线的方式)进行连接。应该理解,在本申请的其他实施例中,车载娱乐信息系统以及行车记录仪分别也可和车内计算平台通过无线连接的方式(例如蓝牙、无线保真(wireless fidelity,Wi-Fi)网络,近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等)进行通信连接,本申请实施例在此不作限制。
可选的，在一些实施例中，车内设置的运动传感器例如包括陀螺仪和加速度计等。例如，陀螺仪和/或加速度计可以设置在车载娱乐信息系统中，或者设置在车内计算平台，或者设置在车内的其他位置上，本申请实施例对此不作限制。这些运动传感器可以检测车辆相对于地面的运动参数，车内计算平台或者车载娱乐信息系统可以获取并对这些运动参数进行处理。
应理解的是，图1所示意的系统架构并不构成对适用于本申请实施例的通信系统架构的具体限定。在本申请另一些实施例中，适用于本申请实施例的系统架构可以包括比图1所示的更多或更少部件，或者不同的部件等，本申请实施例在此不作限制。
例如,本申请实施例还可以应用在包括轮船、飞机等其它交通工具中,从而帮助乘客在乘坐这些交通工具时缓解或者消除晕船、晕机的问题等。又例如,本申请实施例还可以应用在包括车内乘客使用的终端设备以及车内设置的运动传感器的系统中等,本申请实施例在此不作限制。并且,图1中所示的部件可以以硬件,软件或软件和硬件的组合实现。
下文中将以汽车为例说明本申请提供的方法,但应该理解的是,本申请提供的方法还可以应用在轮船或者飞机等其他交通工具中。
图2所示的为一例本申请提供的基于视觉补偿图像的防晕车方法的示意性流程图。图2所示的方法可以应用在图1所示的通信系统中。在图2所示的例子中，以包括车内计算平台和车载娱乐信息系统的场景为例说明，但这不应对本申请实施例造成限制。例如，下述方法S210、S220以及S230的执行主体还可以是车内乘客使用的终端设备。
如图2所示的,该方法包括:S210至S230。
S210,车内计算平台实时地生成视觉补偿图像,该视觉补偿图像反映车内乘客视角下车辆相对于道路的实时运动状态。
可选的,在本申请实施例中,作为一种可能的实现方式,车内计算平台可以对行车记录仪实时拍摄的图像进行处理,生成视觉补偿图像,该视觉补偿图像为车内乘客视角的视觉补偿图像,可以反映车辆相对于道路(或者周围环境)的实时运动状态。或者,作为另一种可能的实现方式,车内计算平台也可以根据车内的陀螺仪和加速度计等运动传感器实时检测的数据,利用该数据对预设的道路模型图像进行处理,生成视觉补偿图像,该视觉补偿图像为车内乘客视角的视觉补偿图像,可以反映车辆相对于道路(或者周围环境)的实时运动状态。
下面将对这两种方式分别进行说明。
在本申请实施例中，作为一种可能的实现方式，图3所示的为本申请实施例提供的一例车内计算平台利用行车记录仪实时拍摄的图像生成视觉补偿图像的示意性流程图。如图3所示的，该方法包括S210a至S212a：
S210a,行车记录仪实时地拍摄图像。
在本申请实施例中，安装在车内的行车记录仪可以实时地拍摄汽车在行驶过程中汽车前方道路以及周围环境的图像。例如，该拍摄图像中可以包括汽车当前行驶的道路信息和车道信息。该拍摄图像中的内容可以表示车内乘客所看到的车外环境(例如包括道路、车道线等)的运动情况，即为车内乘客视角的图像。
S211a,车内计算平台获取拍摄图像后,检测该拍摄图像中的道路部分F和车道线L。
在一些实施例中，行车记录仪可以通过控制器局域网(controller area network,CAN)总线、数据线、或者无线通信的方式，将拍摄得到的图像实时地发送给车内计算平台。
车内计算平台获取到该图像后,可以利用人工神经网络算法在拍摄图像中检测出道路部分F和车道线L,这样可以避免拍摄图像中其它背景的干扰,具有更好的可视化效果。
例如,车内计算平台可以利用Mask R-CNN算法执行道路和车道线检测,从而在该图像中确定出道路部分F和车道线L。应该理解,在本申请的其他实施例中,车内计算平台还可以利用其他算法在该图像中确定出道路部分F和车道线L,本申请实施例在此不作限制。
S212a,车内计算平台对拍摄图像中的道路部分F和车道线L分别填充不同的颜色,得到用户视角的视觉补偿图像。
在车内计算平台对拍摄图像中的道路部分F和车道线L分别填充不同的颜色的过程中，作为一种可能的实现方式，车内计算平台实时地在该拍摄图像中检测出道路部分F和车道线L后，可以得到一个包括道路部分F和车道线L的图像。车内计算平台还可以对该图像进行像素转换得到二值图像，即得到一个该拍摄图像对应的二值图像。
其中,二值图像(Binary Image)是指图像上的每一个像素只有两种可能的取值或灰度等级状态,也就是说,图像中的任何像素点的灰度值均为0或者255,分别代表黑色(0)和白色(255),整个图像呈现出只有黑和白的视觉效果。
在一些实施例中，该二值图像中包括道路部分F和车道线L，道路部分F和车道线L上的像素点的灰度值均为255，除道路部分F和车道线L之外的其它像素的灰度值均为0。在二值图像中，像素点的灰度值为255表示该像素点为白色，像素点的灰度值为0表示该像素点为黑色。换句话说，在该二值图像中，道路部分F和车道线L均为白色，除道路部分F和车道线L之外的区域均为黑色。
在一些实施例中，在对该二值图像中道路部分F和车道线L分别填充不同的颜色的过程中，可以使用如下公式(1)对二值图像中的像素进行填充颜色：
C(p)=(255,255,255,255)，若L(p)=255；C(p)=(75,75,75,255)，若F(p)=255；C(p)=(0,0,0,0)，其他情况   (1)
在公式(1)中,p表示该二值图像中的某一个像素点,C表示视觉补偿图像,C(p)表示视觉补偿图像中像素点p的颜色。L(p)表示在该二值图像中车道线上的像素点p的灰度,L(p)=255表示在该二值图像中车道线L上的像素点p的灰度为白色,即该二值图像中车道线上的像素点p的灰度值为255。F(p)表示在该二值图像中道路部分F的像素点p的灰度,F(p)=255表示在该二值图像中道路部分的像素点p的灰度为白色,即该二值图像中道路部分的像素点p的灰度值为255。可选的,在公式(1)中,F(p)=255可以替换为F(p)=1,即F(p)=1也可表示在该二值图像中道路部分的像素点p的灰度为白色。L(p)=255也可以替换为L(p)=1,即L(p)=1也可以表示在该二值图像中车道线上的像素点p的灰度为白色。
在公式(1)中，利用像素点p在红(R)、绿(G)、蓝(B)以及alpha这四个通道上取值表示该视觉补偿图像中像素点p的颜色。例如在公式(1)中，(255,255,255,255)分别表示视觉补偿图像中像素点p在红(R)、绿(G)、蓝(B)以及alpha这四个通道上取值，其中，R、G、B通道上的取值均为255，alpha通道上的取值也为255。
在本申请实施例中,alpha通道上的取值为255表示该像素点为完全不透明的,alpha通道上的取值为0表示该像素点为完全透明的,alpha通道上的取值在0至255之间表示该像素点为半透明。
例如,在利用公式(1)对二值图像中像素填充颜色的过程中,如果像素点p位于车道线L上,并且,像素点p的灰度为白色,即L(p)=255,则对该像素点p进行填充颜色的过程中,R、G、B通道上的取值均为255,alpha通道上的取值也为255,表示该像素点p在视觉补偿图像中为完全不透明的,并且为白色;
在利用公式(1)对二值图像中像素填充颜色的过程中,如果像素点p位于道路部分F,并且,像素点p的灰度为白色,即F(p)=255,则对该像素点p进行填充颜色的过程中,R、G、B通道上的取值均为75,alpha通道上的取值也为255,表示该像素点p在视觉补偿图像中为完全不透明的,并且颜色和车道线上像素点的颜色不同。
在利用公式(1)对二值图像中像素填充颜色的过程中,如果像素点p位于道路部分F和车道线L之外的其他位置,即该像素点p的灰度为黑色(像素点p的灰度值为0),则对该像素点p进行填充颜色的过程中,R、G、B通道上的取值均为0,alpha通道上的取值也为0,表示该像素点p在视觉补偿图像中为完全透明的。
在利用公式(1)对二值图像中像素填充颜色的过程中，将道路部分F和车道线L填充为非透明的，将除道路部分和车道线之外的其他区域填充为透明的，得到视觉补偿图像，可以避免行车记录仪拍摄的图像中的其他区域(即除道路部分和车道线之外的区域)对图像的干扰，即避免了视觉补偿图像中道路背景的干扰，具有更好的可视化效果，可以在解决乘客晕车的基础上，提高该视觉补偿图像的准确性。
进一步的,在利用公式(1)对二值图像中像素填充颜色的过程中,由于人类视觉神经的简单细胞对边缘敏感,因此提取或生成道路和车道线,并将道路部分F和车道线L填充为不同的颜色,得到视觉补偿图像。乘客可以直观的看到道路部分F和车道线L之间的分界线或者边缘,从而使得乘客可以从视觉补偿图像中分辨出道路部分F和车道线L,可以感知到自己相对于道路部分F和车道线L的运动状态,实现最大化地激励用户的视觉。
应理解，在本申请实施例中，在对该二值图像中道路部分F和车道线L分别填充不同的颜色的过程中，也可以使用和公式(1)不同的公式对该二值图像中道路部分F和车道线L分别填充不同的颜色。例如，还可以利用如下公式(2)对二值图像中道路部分F和车道线L分别填充不同的颜色：
C(p)=(R1,G1,B1,alpha1)，若L(p)=255；C(p)=(R2,G2,B2,alpha2)，若F(p)=255；C(p)=(R3,G3,B3,0)，其他情况   (2)
在利用公式(2)对二值图像中像素填充颜色的过程中,如果像素点p位于车道线 L上,并且,像素点p的灰度为白色,即L(p)=255,则对该像素点p进行填充颜色的过程中,像素点p在R、G、B通道上的值分别为R1、G1、B1,像素点p在alpha通道上的灰度值为alpha1,R1、G1、B1的值可以不同,alpha1的值也可为0之外的其他值。
在利用公式(2)对二值图像中像素填充颜色的过程中,如果像素点p位于道路部分F,并且,像素点p的灰度为白色,即F(p)=255,则对该像素点p进行填充颜色的过程中,像素点p在R、G、B通道上的值分别为R2、G2、B2,像素点p在alpha通道上的灰度值为alpha2,R2、G2、B2的值可以不同,alpha2的值也可为0之外的其他值。并且,R1、G1、B1组合而成的颜色与R2、G2、B2组合而成的颜色不同,即道路部分F和车道线L为不同的颜色。alpha2和alpha1的值可以相同,也可以不同。
在利用公式(2)对二值图像中像素填充颜色的过程中,如果像素点p位于道路部分和车道线之外的其他位置,则对该像素点p进行填充颜色的过程中,像素点p在R、G、B通道上的值分别为R3、G3、B3,为了避免道路背景的干扰,像素点p在alpha通道上的灰度值可以为0(即为完全透明),R3、G3、B3的值可以不同。可以理解,像素点p在alpha通道上的灰度值也可以不为0,本申请对此不做限定。
通过利用上述的公式(1)或者公式(2),对该二值图像中的道路部分F和车道线L分别填充不同的颜色得到视觉补偿图像,乘客可以从得到视觉补偿图像中分辨出道路部分F和车道线L,感知到自己相对于道路部分F和车道线L的实时运动状态。由于是对二值图像中的像素进行填充颜色,可以降低对像素填充颜色的复杂度和计算量,容易实现。并且,将除道路部分和车道线之外的其他区域填充为透明的,避免了图像中道路背景的干扰,具有更好的可视化效果。
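上述按公式(1)对二值图像逐像素填充颜色的流程，可以用如下Python代码示意(示意性质的最简实现：输入为嵌套列表表示的0/255掩码，函数名为本文为说明而假设，并非专利原文的实现)：

```python
# 按上文公式(1)的思路，将道路/车道线二值掩码逐像素填充为RGBA视觉补偿图像。
# road、lane为同尺寸的0/255灰度掩码(嵌套列表)，返回每个像素的(R,G,B,alpha)。

def fill_compensation_image(road, lane):
    h, w = len(road), len(road[0])
    image = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if lane[y][x] == 255:        # 车道线L上的像素：不透明白色
                image[y][x] = (255, 255, 255, 255)
            elif road[y][x] == 255:      # 道路部分F上的像素：不透明灰色
                image[y][x] = (75, 75, 75, 255)
            else:                        # 其他区域：完全透明，避免背景干扰
                image[y][x] = (0, 0, 0, 0)
    return image

# 一个2x3的小例子：左上角像素为车道线，部分像素为道路，其余为背景
lane = [[255, 0, 0], [0, 0, 0]]
road = [[255, 255, 0], [255, 0, 0]]
img = fill_compensation_image(road, lane)
```

其中车道线优先于道路部分判断，与上文先判断L(p)再判断F(p)的顺序一致。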
例如,图4所示的为本申请提供的一例车内计算平台利用行车记录仪实时拍摄的图像生成的视觉补偿图像的示意图,其中,图4中的a图所示为行车记录仪拍摄的图像,图4中的b图所示的为车内计算平台对该图像进行像素转换得到二值图像后,在该二值图像中对道路部分F和车道线L分别填充不同颜色得到的视觉补偿图像。从图4中的b图可以看出,在利用公式(1)或者公式(2)对二值图像中的道路部分F和车道线L分别填充不同的颜色后,该二值图像中仅仅包括道路部分F和车道线L,并且,两者的颜色不同。图4中的a图中所示的其他部分(例如远处的天空、路边的草地等)全部为透明的,不会显示在图4中的b图所示的视觉补偿图像中。
在车内计算平台对该拍摄图像中的道路部分F和车道线L分别填充不同的颜色的过程中，作为另一种可能的实现方式，在车内计算平台在拍摄图像中检测出道路部分F和车道线L后，可以得到一个包括道路部分F和车道线L的图像。车内计算平台也可以直接对该图像中的道路部分F和车道线L分别填充不同的颜色得到视觉补偿图像，而不需要将该图像进一步地转换为二值图像。
在车内计算平台对该图像中的道路部分F和车道线L分别填充不同的颜色的过程中，可选的，可以利用如下的公式(3)对拍摄图像中的像素进行填充颜色：
C(p)=(R1,G1,B1,alpha1)，若p∈L；C(p)=(R2,G2,B2,alpha2)，若p∈F；C(p)=(R3,G3,B3,0)，其他情况   (3)
在利用公式(3)对该图像中的像素填充颜色的过程中,如果像素点p位于车道线 L上,即像素点p属于车道线L包括的像素,则对该像素点p进行填充颜色的过程中,像素点p在R、G、B通道上的值分别为R1、G1、B1,像素点p在alpha通道上的灰度值为alpha1,R1、G1、B1的值可以不同,alpha1的值也可为除0之外的其他值。
在利用公式(3)对该图像中像素填充颜色的过程中，如果像素点p位于道路部分F，即像素点p属于道路部分F包括的像素，则对该像素点p进行填充颜色的过程中，像素点p在R、G、B通道上的值分别为R2、G2、B2，像素点p在alpha通道上的灰度值为alpha2，R2、G2、B2的值可以不同，alpha2的值也可为除0之外的其他值。并且，R1、G1、B1组合而成的颜色与R2、G2、B2组合而成的颜色不同，即道路部分F和车道线L为不同的颜色。alpha2和alpha1的值可以相同，也可以不同。
在利用公式(3)对该图像中像素填充颜色的过程中,如果像素点p位于道路部分和车道线之外的其他位置,则对该像素点p进行填充颜色的过程中,像素点p在R、G、B通道上的值分别为R3、G3、B3,像素点p在alpha通道上的灰度值为0(即为完全透明),R3、G3、B3的值可以不同。可以理解,像素点p在alpha通道上的灰度值也可以不为0,本申请对此不做限定。
通过利用上述的公式(3)对该图像中的道路部分F和车道线L分别填充不同的颜色,得到视觉补偿图像。乘客可以从该视觉补偿图像中分辨出道路部分F和车道线L,感知到自己相对于道路部分F和车道线L的实时的运动状态。将除道路部分和车道线之外的其他区域填充为透明的,避免了图像中道路背景的干扰,具有更好的可视化效果。
应理解，在本申请实施例中，在车内计算平台对拍摄图像中的道路部分F和车道线L分别填充不同的颜色的过程中，还可以利用其它不同的公式或者方法对该拍摄图像中的道路部分F和车道线L分别填充不同的颜色，只要填充后将除道路部分和车道线之外的其他区域填充为透明的或者接近透明的，将道路部分F和车道线L填充为不同的颜色即可，本申请实施例在此不作限制。
还应理解,上述的S210a至S212a均为实时处理的过程,即车内计算平台可以实时地生成视觉补偿图像。
在本申请实施例中，作为另一种可能的实现方式，图5所示的为本申请实施例提供的一例车内计算平台利用陀螺仪和加速度计等运动传感器实时检测的数据，对预设的道路模型图像进行处理，生成视觉补偿图像的示意性流程图。如图5所示的，该方法包括：S210b至S213b。
S210b,车内的陀螺仪和加速度计实时地检测车辆的运动参数。
在本申请实施例中，安装在车内的陀螺仪和加速度计等可以实时地检测汽车在行驶过程中车辆的运动参数。例如，该运动参数可以包括车辆运动过程中，车身在X轴、Y轴、Z轴上的运动参数等，本申请实施例在此不作限制。
S211b,车内计算平台获取陀螺仪和加速度计实时地检测的运动参数并进行处理,得到车辆运动的三维旋转向量和三维平移向量。
在一些实施例中,陀螺仪和加速度计可以通过CAN总线、数据线、或者无线通信的方式,将实时检测的运动数据发送给车内计算平台。
车内计算平台获取到陀螺仪和加速度计实时地检测的运动参数后，可以对该运动参数进行处理，得到车辆在X轴、Y轴、Z轴上的转动角速度和运动加速度，将该转动角速度和运动加速度经过滤波和时间积分后得到三维旋转向量(rx,ry,rz)和三维平移向量(tx,ty,tz)。
在一些实施例中,车辆的X轴、Y轴、Z轴的示意图如图6所示的,在本申请实施例中,作为一种可能的实现方式,车辆的X轴的正方向可以为车身右手方向(即驾驶车辆时车辆前进方向的右侧),车辆的Y轴的正方向可以为车头前进方向(即驾驶车辆时车辆前进方向),Z轴的正方向可以为车身正上方(即垂直于X轴和Y轴所在的平面,并且指向车辆顶部)。
其中,三维旋转向量(rx,ry,rz)和三维平移向量(tx,ty,tz)可以表征车辆实时的运动状态。
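S211b中“滤波和时间积分”的过程可以用如下Python代码示意(本文假设的最简实现：仅对角速度做一次时间积分、对加速度做两次时间积分，省略了实际系统中必需的滤波、坐标系对齐和重力分量去除)：

```python
# 简化示意：对角速度做一次时间积分得到三维旋转向量，
# 对加速度做两次时间积分得到三维平移向量。

def integrate_motion(gyro_samples, accel_samples, dt):
    """gyro_samples: [(wx,wy,wz), ...]，单位rad/s；
    accel_samples: [(ax,ay,az), ...]，单位m/s^2；dt为采样间隔(s)。
    返回三维旋转向量(rx,ry,rz)和三维平移向量(tx,ty,tz)。"""
    r = [0.0, 0.0, 0.0]   # 旋转向量
    v = [0.0, 0.0, 0.0]   # 速度(加速度的一次积分)
    t = [0.0, 0.0, 0.0]   # 平移向量(速度的再次积分)
    for w, a in zip(gyro_samples, accel_samples):
        for i in range(3):
            r[i] += w[i] * dt
            v[i] += a[i] * dt
            t[i] += v[i] * dt
    return tuple(r), tuple(t)

# 例：沿Y轴1m/s^2匀加速、绕Z轴0.1rad/s匀速旋转，采样10次、间隔0.1s
gyro = [(0.0, 0.0, 0.1)] * 10
accel = [(0.0, 1.0, 0.0)] * 10
rot, trans = integrate_motion(gyro, accel, 0.1)
```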
S212b,车内计算平台根据车辆的三维旋转向量和三维平移向量,对预设的道路模型中的道路纹理进行循环移动变形和弯曲变形,得到处理后的道路纹理。
在一些实施例中,可以在车内计算平台中预先存储道路模型(或者也可以在车载娱乐信息系统中预先存储道路模型)。道路模型可以通过平面建模形成。道路模型包括道路纹理,道路纹理包括道路部分F和车道线L,车道线L包括一定宽度的实线和虚线。道路部分F和车道线L的颜色不同,乘客可以直观在道路模型中分辨出道路部分F和车道线L之间的分界线或者边缘,从而使得乘客可以分辨出道路部分F和车道线L。
在本申请实施例中,在道路纹理中存在纹理坐标系,纹理坐标系包括u轴和v轴,其中,u轴为垂直于车道线L的方向,v轴为平行于车道线L的方向。
例如,图7所示的为一例道路模型的示意图,如图7所示的,道路模型包括道路纹理,道路纹理包括道路部分F和车道线L,道路部分F和车道线L的颜色不同,在道路纹理中存在纹理坐标系,纹理坐标系包括u轴和v轴。
在得到三维旋转向量(rx,ry,rz)后,利用三维旋转向量中的rz,使得道路纹理产生对应曲率rz的弯曲,即对道路纹理施加弯曲变形,例如,可以利用如下公式(4)使得道路纹理产生对应曲率rz的弯曲变形:
在公式(4)中,u’表示道路纹理产生对应曲率rz弯曲后u轴的值,v’表示道路纹理产生对应曲率rz弯曲后v轴的值,u表示道路纹理产生对应曲率rz弯曲前u轴的值,v表示道路纹理产生对应曲率rz弯曲前v轴的值。k为控制转向角度和纹理曲率的参数。
示例性的,k的取值范围可以为0.05≤k≤0.20。k的取值越小,则经过弯曲变形后的道路纹理的弯曲效果越明显。例如,k的取值可以为0.1234134。
经过对道路纹理施加弯曲变形后,道路纹理的形变便可以反映出车辆实时运动过程中绕着Z轴的运动状态,例如车辆的左右转弯时运动状态便可以从道路纹理的形变体现出来。
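对道路纹理施加弯曲变形的一种与上文文字描述相符的假设形式为u’=u+(rz/k)·v²、v’=v(k越小，弯曲越明显)。该具体形式为本文为说明而做的推测，并非专利给出的公式(4)本身，仅用于示意弯曲变形的思路：

```python
# 假设形式的弯曲变形示意：u' = u + (rz / k) * v^2, v' = v。
# rz为绕Z轴的旋转分量，k为控制转向角度和纹理曲率的参数(k越小弯曲越明显)。

def bend_texture(u, v, rz, k=0.1):
    u_new = u + (rz / k) * v * v
    return u_new, v

# 直行时(rz=0)纹理坐标不变；转弯时(rz不为0)u随v呈二次偏移
u1, v1 = bend_texture(0.5, 0.2, 0.0)
u2, v2 = bend_texture(0.5, 0.2, 0.05, k=0.1)
```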
在得到三维平移向量(tx,ty,tz)后，利用三维平移向量中的ty，使得道路纹理产生对应速度ty的循环移动，即对道路纹理施加循环移动变形，例如，可以利用如下公式(5)使得道路纹理产生对应速度ty的循环移动：
在公式(5)中,u’表示道路纹理产生对应速度ty的循环移动后u轴的值,v’表示道路纹理产生对应速度ty的循环移动后v轴的值,u表示道路纹理产生对应速度ty的循环移动前u轴的值,v表示道路纹理产生对应速度ty的循环移动前v轴的值。s为控制纹理移动速度的参数。
其中，s取值与纹理的物理长度有关，s的取值越大，则经过循环移动后的道路纹理的移动速度越快。以图7为例，假设在图7所示的例子中，中间的一段虚车道线的长度为4m，两段虚车道线之间的空白段(即道路部分)的长度为6m，车道线的总长度为30m，假设车速为60km/h，三维平移向量的采样间隔为100ms，则100ms内汽车沿Y方向移动的距离ty=1.67m，则s取值为1.67/30=0.05567。
经过对道路纹理施加循环移动变形后，道路纹理的形变便可以反映出车辆实时运动过程中沿着Y轴的运动状态，例如车辆在崎岖不平的道路上的颠簸状态便可以从道路纹理的形变体现出来。
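循环移动变形可以按“u不变、v循环平移s”的假设形式示意(s=ty/纹理物理长度，与上文给出的数值例子一致)。该具体形式同样为本文的推测，并非专利给出的公式(5)本身：

```python
# 假设形式的循环移动变形示意：u' = u, v' = (v + s) mod 1。
# s为控制纹理移动速度的参数，按上文取s = ty / 纹理总长度。

def scroll_texture(u, v, s):
    return u, (v + s) % 1.0

# 复算上文的例子：车速60km/h、采样间隔100ms、纹理总长30m
ty = 60 / 3.6 * 0.1      # 100ms内沿Y方向移动的距离，约1.67m
s = ty / 30              # 约0.0556
u2, v2 = scroll_texture(0.3, 0.98, s)   # v越过1后回到起点，形成循环滚动
```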
经过对道路纹理施加循环移动变形和弯曲变形后，便可以得到处理后的道路纹理。由于上述的处理过程是实时的，因此处理后的道路纹理可以反映出车辆实时的运动状态。也就是说，在处理后的道路纹理中，道路部分F和车道线L的形状、位置等是随着车辆的运动实时发生变化的，道路纹理中道路部分F和车道线L的变化可以反映车辆相对于道路部分和车道线实时的运动状态。
例如，图8所示的为一例对道路纹理施加循环移动变形和弯曲变形前和变形后的示意图，如图8中的a图所示的，在对道路纹理施加循环移动变形和弯曲变形前，道路部分F和车道线L是静止不变的。如图8中的b图所示的，在对道路纹理施加循环移动变形和弯曲变形后，道路部分F和车道线L的位置、形状等会随着车辆运动状态的变化而变化。
S213b,车内计算平台将处理后的道路纹理设置在道路模型上,对该道路模型施加刚体变换和透视变换,得到视觉补偿图像。
在一些实施例中，在得到循环移动变形和弯曲变形后的道路纹理后，车内计算平台可以将处理后的道路纹理设置在道路模型上，例如贴在道路模型上，然后对该道路模型施加刚体变换，例如，可以使用向量(rx,ry,tz)对该道路模型施加刚体变换，然后对刚体变换完成之后的道路模型进行透视变换，便可以得到视觉补偿图像。该视觉补偿图像经过透视变换后即为车内乘客视角的视觉补偿图像。例如，图9所示的为一例对道路模型进行透视变换得到的车内乘客视角的视觉补偿图像的示意图。如图9所示的，该视觉补偿图像可以表征车内乘客所看到车外的运动情况，即反映车内乘客视角下车辆相对于道路的实时运动状态。
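S213b中先刚体变换、再透视变换的过程可以用如下Python代码示意(本文假设的简化针孔投影模型：以车头前进方向Y作为投影深度、焦距归一化为f，并非专利原文的实现)：

```python
# 简化示意：先用(rx, ry, tz)对道路模型上的三维点做刚体变换，
# 再以车头前进方向Y为深度做透视除法，得到乘客视角下的二维投影点。
import math

def rigid_then_perspective(points, rx, ry, tz, f=1.0):
    """points: 道路模型上的三维点[(x,y,z), ...]；rx、ry为绕X、Y轴的旋转角(弧度)，
    tz为Z向平移，f为归一化焦距。"""
    out = []
    for x, y, z in points:
        # 绕X轴旋转rx
        y1 = y * math.cos(rx) - z * math.sin(rx)
        z1 = y * math.sin(rx) + z * math.cos(rx)
        # 绕Y轴旋转ry
        x1 = x * math.cos(ry) + z1 * math.sin(ry)
        z2 = -x * math.sin(ry) + z1 * math.cos(ry)
        # Z向平移tz后做透视除法(以前进方向Y作为深度)
        out.append((f * x1 / y1, f * (z2 + tz) / y1))
    return out

# 无旋转、无平移时，正前方10m处、路面高度-1.5m的点投影到画面中线偏下处
pts = rigid_then_perspective([(0.0, 10.0, -1.5)], 0.0, 0.0, 0.0)
```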
应理解,上述的S210b至S213b均为实时处理的过程,即车内计算平台可以实时地生成视觉补偿图像。
还应理解，在本申请的其他实施例中，车内计算平台还可以通过其他方法实时地生成视觉补偿图像，只要该视觉补偿图像可以反映车内乘客视角下车辆相对于道路的实时运动状态即可，本申请实施例在此不作限制。
还应理解，在上述的例子中，是以车内计算平台实时地生成视觉补偿图像为例进行说明，在本申请的其他实施例中，还可以是车载娱乐信息系统对行车记录仪实时拍摄的图像进行处理，生成视觉补偿图像。或者，车载娱乐信息系统根据车内的陀螺仪和加速度计等运动传感器实时检测的数据，利用该数据对预设的道路模型图像进行处理，生成视觉补偿图像。或者，还可以是车内乘客使用的终端设备(例如手机、平板电脑等)对行车记录仪实时拍摄的图像进行处理，生成视觉补偿图像，或者根据车内的陀螺仪和加速度计等运动传感器实时检测的数据，利用该数据对预设的道路模型图像进行处理，生成视觉补偿图像。具体的过程和上述例子中的描述类似，为了简洁，这里不再赘述。
S220,车内计算平台将该视觉补偿图像发送给车载娱乐信息系统。
在一些实施例中,车内计算平台可以通过CAN总线、数据线、或者无线通信的方式,将得到的视觉补偿图像C实时地发送给车载娱乐信息系统。
在另一些实施例中,车内计算平台还可以根据车载娱乐信息系统中显示屏的尺寸等,将视觉补偿图像C进行裁剪或者缩放等,然后将裁剪或者缩放后的视觉补偿图像C发送给车载娱乐信息系统。
可选的,在本申请的一些实施例中,如果是车载娱乐信息系统生成视觉补偿图像C,则该方法可以不包括S220。
可选的,在本申请的一些实施例中,如果是车内乘客使用的终端设备生成视觉补偿图像C,则S220可以替换为:终端设备将该视觉补偿图像发送给车载娱乐信息系统。
可选的,在本申请的一些实施例中,如果是车内乘客使用的终端设备生成视觉补偿图像C并最终显示在终端设备的显示屏上,则该方法也可以不包括S220。
S230,车载娱乐信息系统将视觉补偿图像显示在显示屏上。
在一些实施例中，可以将视觉补偿图像叠加显示在显示屏上。由于车内乘客(例如后排乘客)可以利用车载娱乐信息系统的显示屏观看视频、图片、浏览网页等，在该显示屏上还显示有乘客观看的视频、图片、浏览网页的界面(称为主界面)。因此，为了降低或者消除该视觉补偿图像对乘客浏览的主界面的影响，在将视觉补偿图像叠加显示在显示屏时，可以调整视觉补偿图像的透明度，从而可以在利用视觉补偿图像激励用户视觉的同时，降低或者消除视觉补偿图像对显示屏显示的主界面的影响。即在缓解或者消除视觉神经与前庭神经的冲突的基础上，降低对用户使用车载娱乐信息系统的影响，从而进一步提高用户体验。
示例性的,在一些实施例中,可以利用如下公式(6)来确定显示屏最终显示的主界面和视觉补偿图像:
A=α×C+(1-α)I   (6)
在公式(6)中,C表示视觉补偿图像,I表示显示屏中显示的主界面,α为叠加显示时的透明度参数,α的取值范围为小于1,并且大于0,A表示最终显示在显示屏上的主界面和视觉补偿图像。
在本申请实施例中，可以通过调整α的大小调整最终显示的效果。例如：α的值越大，视觉补偿图像C在混合结果中的权重越大，即视觉补偿图像C越不透明，对主界面的遮盖影响越大；α的值越小，视觉补偿图像C的透明度越大，对主界面的遮盖影响越小，乘客可以透过视觉补偿图像C看到被视觉补偿图像C遮盖的主界面部分。
在一些实施例中,α的值可以是根据显示屏的具体情况(例如显示屏的分辨率、屏幕大小等)、外界光线等预先配置的。例如,白天时间段和晚上时间段配置的α的值可以不同,不同类型或者品牌的车辆配置的α的值也可以不同。
在一些实施例中,α的值也可以是用户根据自己的需要设置的。进一步的,用户还可以更新α的值。
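公式(6)的逐像素混合可以用如下Python代码示意(示意性实现：对RGB三个通道分别按α加权；α越大，补偿图像C在混合结果中的权重越大)：

```python
# 按公式(6) A = α×C + (1-α)×I 对单个像素做加权混合。

def blend_pixel(c, i, alpha):
    """c为视觉补偿图像像素(R,G,B)，i为主界面像素(R,G,B)，0<alpha<1。"""
    return tuple(round(alpha * cc + (1 - alpha) * ii) for cc, ii in zip(c, i))

# α=0.25时，白色补偿像素叠加在黑色主界面像素上得到深灰色
out = blend_pixel((255, 255, 255), (0, 0, 0), 0.25)
```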
例如,图10所示的为一例将视觉补偿图像叠加显示在显示屏后显示的界面的示意图,如图10所示的,视觉补偿图像C为透明的,视觉补偿图像C中的道路部分F和车道线L为不同的颜色。乘客可以透过视觉补偿图像C看到被视觉补偿图像C遮盖的主界面部分。
在另一些实施例中,除了车载娱乐信息系统自动调整(例如利用上述的公式(6))视觉补偿图像C的透明度之外,在显示屏的界面上还可以存在用于调整视觉补偿图像C透明度的控件,用户可以根据自己的需求,利用该控件手动调整视觉补偿图像C的透明度。这样可以进一步的满足用户的需求,用户可以根据需要实时的调整视觉补偿图像C的透明度,提高用户体验。
在一些实施例中,视觉补偿图像C也可以通过悬浮窗的形式设置在主界面上,用户可以用手调整视觉补偿图像C在主界面中的位置。例如,可以用手指拖住视觉补偿图像C在显示屏上移动,从而将视觉补偿图像C拖动至合适的位置上,这样可以进一步的提高用户体验。
在一些实施例中,在视觉补偿图像C通过悬浮窗的形式显示在主界面上时,可以不需要调整视觉补偿图像C的透明度,在这种情况下,视觉补偿图像C可以是不透明的,车载娱乐信息系统得到视觉补偿图像C后,可以直接通过悬浮窗的形式显示在主界面上,而不需要进行进一步的处理(例如不需要自动调整或者手动调整视觉补偿图像C的透明度)。
在一些实施例中,在视觉补偿图像C通过悬浮窗的形式显示在主界面上时,也可以利用自动调整或者手动调整的方式来调整视觉补偿图像C的透明度。
在另一些实施例中,视觉补偿图像C和主界面还可以以分屏的方式显示在车载娱乐信息系统的显示屏中,在这种情况下,视觉补偿图像C可以为不透明的,车载娱乐信息系统得到视觉补偿图像C后,可以直接通过和主界面以分屏的方式显示在显示屏中,而不需要进行进一步的处理(例如不需要自动调整或者手动调整视觉补偿图像C的透明度),从而可以降低计算量以及处理的复杂度,节约计算资源。
在另一些实施例中，视觉补偿图像C还可以以一个小图标的形式显示在显示屏的Dock上(Dock可以理解为显示屏下方显示一行“所有应用”的图标)，在用户晕车时可以点击该视觉补偿图像C对应的小图标，从而可以在显示屏上打开视觉补偿图像C。在乘客点击视觉补偿图像C对应的小图标后，视觉补偿图像C可以以悬浮窗的形式显示在主界面上、或者，和主界面以分屏的方式显示在显示屏上，或者，还可以通过图10所示的叠加显示的方式在显示屏上显示。可选的，视觉补偿图像C的透明度也可以通过自动调整或者手动调整的方式进行调整。
应该理解,除了上述的几种方式显示视觉补偿图像和主界面之外,在本申请的其他实施例中,还可以通过其他的方式在显示屏上显示视觉补偿图像和主界面。本申请实施例在此不作限制。
还应理解，上述的例子中，以车载娱乐信息系统将视觉补偿图像显示在显示屏上为例进行说明，在本申请的其他实施例中，还可以是车内计算平台控制将视觉补偿图像显示在车载娱乐信息系统的显示屏上，即上述S230的执行主体还可以是车内计算平台。也就是说，上述S210至S230的执行主体可以是车内计算平台，或者，也可是车载娱乐信息系统，或者，还可以是包括车载娱乐信息系统和车内计算平台的系统，本申请实施例在此不作限制。
在一些实施例中,上述S210至S230的执行主体还可以是车内乘客使用的终端设备(例如手机、平板电脑等),在这种情况下,终端设备生成视觉补偿图像后,也可以将视觉补偿图像显示在终端设备的显示屏上供乘客观看。即S230也可替换为:终端设备将视觉补偿图像显示在显示屏上。本申请实施例在此不作限制。
在乘客使用车载娱乐信息系统的显示屏时，该显示屏上显示有视觉补偿图像和主界面，从而可以在利用视觉补偿图像激励用户视觉的同时，降低或者消除视觉补偿图像对显示屏显示的主界面的影响。即在缓解或者消除晕车症状的基础上，降低对用户使用车载娱乐信息系统的影响，从而进一步提高用户体验。
本申请实施例提供的基于视觉补偿图像的防晕车方法，通过实时地生成视觉补偿图像，该视觉补偿图像可以反映出车辆相对道路的实时运动状态，该视觉补偿图像中包括道路和车道线，道路和车道线分别为不同的颜色。将该视觉补偿图像叠加在车载娱乐信息系统的显示屏上进行显示，用户便可在该显示屏上看到自己相对于地面的运动状态，从而缓解或者消除视觉神经与前庭神经的冲突，也就解决了晕车的问题。并且，在该视觉补偿图像中，除道路部分和车道线之外的其他区域为透明的，可以避免视觉补偿图像中道路背景的干扰，具有更好的可视化效果，可以在解决乘客晕车的基础上，提高该视觉补偿图像的准确性。在将视觉补偿图像叠加显示在显示屏时，可以调节视觉补偿图像的透明度，可以在利用视觉补偿图像激励用户视觉的同时，降低或者消除视觉补偿图像对显示屏显示的主界面的影响，从而进一步提高用户体验。
应理解，上述只是为了帮助本领域技术人员更好地理解本申请实施例，而非要限制本申请实施例的范围。本领域技术人员根据所给出的上述示例，显然可以进行各种等价的修改或变化，例如，上述方法实施例中的某些步骤可以不是必须的，或者可以新加入某些步骤等。或者上述任意两种或者任意多种实施例的组合。这样的修改、变化或者组合后的方案也落入本申请实施例的范围内。
还应理解,本申请实施例中的方式、情况、类别以及实施例的划分仅是为了描述的方便,不应构成特别的限定,各种方式、类别、情况以及实施例中的特征在不矛盾的情况下可以相结合。
还应理解，在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分，并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后，各过程的执行顺序应以其功能和内在逻辑确定，而不应对本申请实施例的实施过程构成任何限定。
还应理解,上文对本申请实施例的描述着重于强调各个实施例之间的不同之处,未提到的相同或相似之处可以互相参考,为了简洁,这里不再赘述。
还应理解，本申请实施例中，“预定义”或者“预设”可以通过在设备中预先保存相应的代码、表格或其他可用于指示相关信息的方式来实现，本申请对于其具体的实现方式不做限定。
上述结合图1-图10描述了本申请实施例提供的基于视觉补偿图像的防晕车方法的实施例,下面描述本申请实施例提供的相关设备。
本实施例可以根据上述方法,对各个设备(例如车内计算平台、车载娱乐信息系统、车内乘客使用的终端设备等)进行功能模块的划分。例如,可以对应各个功能,划分为各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
本申请实施例还提供了一种基于视觉补偿图像的防晕车系统,该系统包括:车内计算平台和车载娱乐信息系统。
可选的,该系统还可以包括:行车记录仪和/或运动传感器,例如如图1所示的。
其中,车载娱乐信息系统包括显示屏,该显示屏可以设置在前排座椅的背面,后排乘客可以利用该车载娱乐信息系统观看视频、图片、浏览网页等。
车内计算平台可以包括:CDC、车上的ECU、行车电脑、车载电脑或者车载T-BOX等。
运动传感器例如可以包括陀螺仪和加速度计等。运动传感器可以设置在车载娱乐信息系统中,或者设置在车内计算平台,或者设置在车内的其他位置上。
本申请实施例还提供了一种基于视觉补偿图像的防晕车系统,该系统包括:车内乘客使用的终端设备,该终端设备具有显示屏。
可选的,该系统还可以包括:行车记录仪和/或运动传感器。
本申请实施例提供的基于视觉补偿图像的防晕车系统,用于执行上述基于视觉补偿图像的防晕车方法,因此可以达到与上述实现方法相同的效果。
可以理解的是,本申请实施例示意的结构并不构成对基于视觉补偿图像的防晕车系统的具体限定。在本申请另一些实施例中,基于视觉补偿图像的防晕车系统可以包括更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。
本申请实施例还提供了一种基于视觉补偿图像的防晕车装置,该装置例如可以为车内计算平台、车载娱乐信息系统或者为车内乘客使用的终端设备。在采用集成的单元的情况下,该装置可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对装置的动作进行控制管理。例如,可以用于支持装置执行处理单元执行的步骤。存储模块可以用于支持存储程序代码和数据等。通信模块,可以用于支持该装置与其他设备(例如运动传感器、行车记录仪等)的通信。
其中，处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框，模块和电路。处理器也可以是实现计算功能的组合，例如包含一个或多个微处理器组合，数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
示例性地,图11示出了本申请提供的一例基于视觉补偿图像的防晕车装置500的硬件结构示意图,该装置500可以为上述的车内计算平台、车载娱乐信息系统、或者车内乘客使用的终端设备。如图11所示,该装置500可包括处理器510,外部存储器接口520,内部存储器521,通用串行总线(universal serial bus,USB)接口530,充电管理模块540,电源管理模块541,电池542,无线通信模块550等。
可以理解的是,本申请实施例示意的结构并不构成对装置500的具体限定。在本申请另一些实施例中,装置500可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
例如,装置500为车载娱乐信息系统或者终端设备时,该装置还可以包括显示屏。
处理器510可以包括一个或多个处理单元。例如:处理器510可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的部件,也可以集成在一个或多个处理器中。在一些实施例中,装置500也可以包括一个或多个处理器510。其中,控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
在一些实施例中,处理器510可以包括一个或多个接口。接口可以包括集成电路间(inter-integrated circuit,I2C)接口,集成电路间音频(integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,SIM卡接口,和/或USB接口等。其中,USB接口530是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口530可以用于连接充电器为装置500充电,也可以用于装置500与外围设备之间传输数据。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对装置500的结构限定。在本申请另一些实施例中,装置500也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
无线通信模块550可以提供应用在装置500上的包括Wi-Fi(包括Wi-Fi感知和Wi-Fi AP)，蓝牙(bluetooth,BT)，无线数传模块(例如，433MHz，868MHz，915MHz)等无线通信的解决方案。无线通信模块550可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块550经由天线1或者天线2(或者，天线1和天线2)接收电磁波，将电磁波信号滤波以及调频处理，将处理后的信号发送到处理器510。无线通信模块550还可以从处理器510接收待发送的信号，对其进行调频，放大，转为电磁波辐射出去。
外部存储器接口520可以用于连接外部存储卡，例如Micro SD卡，实现扩展装置500的存储能力。外部存储卡通过外部存储器接口520与处理器510通信，实现数据存储功能。例如将音乐，视频等文件保存在外部存储卡中。
内部存储器521可以用于存储一个或多个计算机程序，该一个或多个计算机程序包括指令。处理器510可以通过运行存储在内部存储器521的上述指令，从而使得装置500执行本申请一些实施例中所提供的基于视觉补偿图像的防晕车方法，以及各种应用以及数据处理等。内部存储器521可以包括代码存储区和数据存储区。其中，代码存储区可存储操作系统。数据存储区可存储装置500使用过程中所创建的数据等。此外，内部存储器521可以包括高速随机存取存储器，还可以包括非易失性存储器，例如一个或多个磁盘存储部件，闪存部件，通用闪存存储器(universal flash storage,UFS)等。在一些实施例中，处理器510可以通过运行存储在内部存储器521的指令，和/或存储在设置于处理器510中的存储器的指令，来使得装置500执行本申请实施例中所提供的基于视觉补偿图像的防晕车方法，以及其他应用及数据处理。
应理解，装置500执行上述相应步骤的具体过程请参照前文中结合图2、图3、图5中各个实施例中描述的车内计算平台或者车载娱乐信息系统执行步骤的相关描述，为了简洁，这里不再赘述。
图12示出了本申请实施例的提供的另一例基于视觉补偿图像的防晕车装置600的示意性框图,该装置600可以对应上述方法实施例中描述的车内计算平台、车载娱乐信息系统、或者车内乘客使用的终端设备。也可以是应用于车内计算平台、车载娱乐信息系统、或者车内乘客使用的终端设备的芯片或组件,并且,该装置600中的各模块或单元分别用于执行上述方法实施例中描述的车内计算平台、车载娱乐信息系统、或者车内乘客使用的终端设备所执行的各动作或处理过程,如图12所示,该装置600可以包括:处理单元610和通信单元620。可选的,该装置600还可以包括存储单元630。
应理解，装置600中各单元执行上述相应步骤的具体过程请参照前文中结合图2、图3、图5中各个实施例中描述的车内计算平台或者车载娱乐信息系统执行步骤的相关描述，为了简洁，这里不再赘述。
可选的，通信单元620可以包括接收单元(模块)和发送单元(模块)，用于执行前述各个方法实施例中车内计算平台、车载娱乐信息系统或者终端设备接收信息和发送信息的步骤。存储单元630用于存储处理单元610和通信单元620执行的指令。处理单元610、通信单元620和存储单元630通信连接，存储单元630存储指令，处理单元610用于执行存储单元存储的指令，通信单元620用于在处理单元610的驱动下执行具体的信号收发。
应理解,通信单元620可以是收发器、输入/输出接口或接口电路等,例如可以由图11所示实施例中的无线通信模块550实现。存储单元可以是存储器,例如,可以由图11所示实施例中的外部存储器接口520和内部存储器521实现。处理单元610可以由图11所示实施例中处理器510,或者可以由处理器510、以及外部存储器接口520、内部存储器521实现。
还应理解，以上装置中单元的划分仅仅是一种逻辑功能的划分，实际实现时可以全部或部分集成到一个物理实体上，也可以物理上分开。且装置中的单元可以全部以软件通过处理元件调用的形式实现；也可以全部以硬件的形式实现；还可以部分单元以软件通过处理元件调用的形式实现，部分单元以硬件的形式实现。例如，各个单元可以为单独设立的处理元件，也可以集成在装置的某一个芯片中实现，此外，也可以以程序的形式存储于存储器中，由装置的某一个处理元件调用并执行该单元的功能。这里该处理元件又可以称为处理器，可以是一种具有信号处理能力的集成电路。在实现过程中，上述方法的各步骤或以上各个单元可以通过处理器元件中的硬件的集成逻辑电路实现或者以软件通过处理元件调用的形式实现。在一个例子中，以上任一装置中的单元可以是被配置成实施以上方法的一个或多个集成电路，例如：一个或多个专用集成电路(application specific integrated circuit,ASIC)，或，一个或多个数字信号处理器(digital signal processor,DSP)，或，一个或者多个现场可编程门阵列(field programmable gate array,FPGA)，或这些集成电路形式中至少两种的组合。再如，当装置中的单元可以通过处理元件调度程序的形式实现时，该处理元件可以是通用处理器，例如中央处理器(central processing unit,CPU)或其它可以调用程序的处理器。再如，这些单元可以集成在一起，以片上系统(system-on-a-chip,SOC)的形式实现。
本申请实施例还提供了一种芯片系统,如图13所示,该芯片系统包括至少一个处理器710和至少一个接口电路720。处理器710和接口电路720可通过线路互联。例如,接口电路720可用于从其它装置(例如车内计算平台或者车载娱乐信息系统)接收信号。又例如,接口电路720可用于向其它装置(例如处理器710)发送信号。示例性的,接口电路720可读取存储器中存储的指令,并将该指令发送给处理器710。当所述指令被处理器710执行时,可使得芯片系统执行上述实施例中的车内计算平台或者车载娱乐信息系统执行的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供了一种计算机可读存储介质，用于存储计算机程序代码，该计算机程序包括用于执行上述本申请实施例提供的任意一种基于视觉补偿图像的防晕车方法的指令。该可读介质可以是只读存储器(read-only memory,ROM)或随机存取存储器(random access memory,RAM)，本申请实施例对此不做限制。
本申请还提供了一种计算机程序产品,该计算机程序产品包括指令,当该指令被执行时,以使得车内计算平台、基于视觉补偿图像的防晕车装置、车载娱乐信息系统、或者基于视觉补偿图像的防晕车系统执行对应于上述方法中的对应的操作。
本申请实施例还提供了一种位于通信装置中的芯片,该芯片包括:处理单元和通信单元,该处理单元,例如可以是处理器,该通信单元例如可以是输入/输出接口、管脚或电路等。该处理单元可执行计算机指令,以使所述通信装置执行上述本申请实施例提供的任一种基于视觉补偿图像的防晕车方法。
可选地,该计算机指令被存储在存储单元中。
可选地，该存储单元为该芯片内的存储单元，如寄存器、缓存等，该存储单元还可以是该终端内的位于该芯片外部的存储单元，如ROM或可存储静态信息和指令的其他类型的静态存储设备、RAM等。其中，上述任一处提到的处理器，可以是一个CPU，微处理器，ASIC，或一个或多个用于控制上述基于视觉补偿图像的防晕车方法的程序执行的集成电路。该处理单元和该存储单元可以解耦，分别设置在不同的物理设备上，通过有线或者无线的方式连接来实现该处理单元和该存储单元的各自的功能，以支持该系统芯片实现上述实施例中的各种功能。或者，该处理单元和该存储器也可以耦合在同一个设备上。
本申请实施例还提供了一种车辆,该车辆包括上述本申请实施例提供的基于视觉补偿图像的防晕车装置、基于视觉补偿图像的防晕车系统、芯片系统或者芯片。
其中,本实施例提供的基于视觉补偿图像的防晕车系统、基于视觉补偿图像的防晕车装置、车辆、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是ROM、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是RAM,其用作外部高速缓存。RAM有多种不同的类型,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
本申请的实施例中的方法可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时，全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中，或者通过所述计算机可读存储介质进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器等数据存储设备。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个可读存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的可读存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (18)

  1. 一种基于视觉补偿图像的防晕车方法,其特征在于,所述方法包括:
    生成视觉补偿图像,所述视觉补偿图像反映车内乘客视角下车辆相对于道路的实时运动状态,所述视觉补偿图像中包括道路部分和车道线;
    将所述视觉补偿图像显示在车载娱乐信息系统的显示屏上。
  2. 根据权利要求1所述的方法,其特征在于,所述生成视觉补偿图像,包括:
    获取车辆内行车记录仪实时拍摄的图像;
    检测所述图像中的道路部分和车道线;
    将所述图像中道路部分和车道线填充为非透明色,将所述图像中道路部分和车道线之外的部分填充为透明色,得到所述视觉补偿图像,所述视觉补偿图像中道路部分和车道线的颜色不同。
  3. 根据权利要求1所述的方法,其特征在于,所述生成视觉补偿图像,包括:
    获取车辆内行车记录仪实时拍摄的图像;
    检测所述图像中的道路部分和车道线;
    将所述图像转换为二值图像,在所述二值图像中,所述道路部分和所述车道线为白色,所述道路部分和所述车道线之外的部分为黑色;
    将所述二值图像中道路部分和车道线填充为非透明色,将所述二值图像中道路部分和车道线之外的部分填充为透明色,得到所述视觉补偿图像,所述视觉补偿图像中道路部分和车道线的颜色不同。
  4. 根据权利要求1所述的方法,其特征在于,所述生成视觉补偿图像,包括:
    获取车辆内陀螺仪和加速度计实时检测的运动参数;
    根据所述运动参数,生成所述车辆的三维旋转向量(rx,ry,rz)和三维平移向量(tx,ty,tz);
    利用所述三维旋转向量(rx,ry,rz)和所述三维平移向量(tx,ty,tz),对预设的道路模型中的道路纹理进行处理,所述道路纹理包括道路部分和车道线,所述道路部分和所述车道线的颜色不同;
    将处理后的所述道路纹理叠加在所述道路模型上；
    对叠加所述处理后的道路纹理的道路模型进行刚体变换和透视变换，得到所述视觉补偿图像。
  5. 根据权利要求4所述的方法,其特征在于,所述利用所述三维旋转向量(rx,ry,rz)和所述三维平移向量(tx,ty,tz),对预设的道路模型进行处理,包括:
    利用所述三维旋转向量中Z轴上的曲率rz对所述道路纹理施加弯曲变形;
    利用所述三维平移向量中Y轴上的速度ty对所述道路纹理施加循环移动变形;
    其中,所述三维旋转向量(rx,ry,rz)和所述三维平移向量(tx,ty,tz)中的X轴的正方向为车身右手方向,Y轴的正方向为车头前进方向,Z轴的正方向为车身正上方。
  6. 根据权利要求5所述的方法,其特征在于,利用如下公式对所述道路纹理施加弯曲变形:
    其中，u’表示所述道路纹理产生对应曲率rz弯曲后u轴的值，v’表示所述道路纹理产生对应曲率rz弯曲后v轴的值，u表示所述道路纹理产生对应曲率rz弯曲前u轴的值，v表示所述道路纹理产生对应曲率rz弯曲前v轴的值，k为控制转向角度和纹理曲率的参数，所述道路纹理中存在纹理坐标系，所述纹理坐标系包括u轴和v轴，u轴为垂直于车道线的方向，v轴为平行于车道线的方向。
  7. 根据权利要求5或6所述的方法,其特征在于,利用如下公式对所述道路纹理施加循环移动变形:
    其中,u’表示所述道路纹理产生对应速度ty循环移动后u轴的值,v’表示所述道路纹理产生对应速度ty循环移动后v轴的值,u表示所述道路纹理产生对应速度ty循环移动前u轴的值,v表示所述道路纹理产生对应速度ty循环移动前v轴的值,s为控制纹理移动速度的参数,所述道路纹理中存在纹理坐标系,所述纹理坐标系包括u轴和v轴,u轴为垂直于车道线的方向,v轴为平行于车道线的方向。
  8. 根据权利要求4至7中任一项所述的方法,其特征在于,对叠加所述处理后的道路纹理的道路模型进行刚体变换,包括:
    利用向量(rx,ry,tz)对叠加所述处理后的道路纹理的道路模型进行刚体变换。
  9. 根据权利要求1至8中任一项所述的方法,其特征在于,所述将所述视觉补偿图像显示在车载娱乐信息系统的显示屏上,包括:
    将所述视觉补偿图像叠加显示在显示屏显示的主界面上。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    利用如下公式来确定所述显示屏最终显示的所述主界面和视觉补偿图像:
    A=α×C+(1-α)I
    其中,C表示所述视觉补偿图像,I表示所述显示屏中显示的所述主界面,α为叠加显示时的透明度参数,α的取值范围为小于1并且大于0,A表示最终显示在所述显示屏上的所述主界面和所述视觉补偿图像,α为预先配置的。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述显示屏的显示界面上还存在用于用户手动调整所述视觉补偿图像透明度的控件。
  12. 一种基于视觉补偿图像的防晕车装置,其特征在于,所述装置包括:处理器及存储器;所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时执行如权利要求1至11中任一项所述的方法。
  13. 一种基于视觉补偿图像的防晕车系统,其特征在于,所述系统包括:车内计算平台和车载娱乐信息系统,所述系统用于执行如权利要求1至11中任一项所述的方法。
  14. 根据权利要求13所述的系统,其特征在于,所述系统还包括:行车记录仪和运动传感器中的至少一种,所述运动传感器包括:陀螺仪和加速度计。
  15. 一种车辆,其特征在于,所述车辆包括:权利要求12所述的防晕车装置,或者权利要求13或者14所述的防晕车系统。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储了计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时使所述处理器执行如权利要求1至11中任一项所述的方法。
  17. 一种计算机程序产品,其特征在于,所述计算机程序产品包括用于执行如权利要求1至11中任一项所述的方法的指令。
  18. 一种芯片,其特征在于,包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有所述芯片的通信设备执行如权利要求1至11中任一项所述的方法。
PCT/CN2023/081367 2022-03-16 2023-03-14 基于视觉补偿图像的防晕车方法、装置和系统 WO2023174283A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210261485.4A CN116804918A (zh) 2022-03-16 2022-03-16 基于视觉补偿图像的防晕车方法、装置和系统
CN202210261485.4 2022-03-16

Publications (1)

Publication Number Publication Date
WO2023174283A1 true WO2023174283A1 (zh) 2023-09-21

Family

ID=88022317

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081367 WO2023174283A1 (zh) 2022-03-16 2023-03-14 基于视觉补偿图像的防晕车方法、装置和系统

Country Status (2)

Country Link
CN (1) CN116804918A (zh)
WO (1) WO2023174283A1 (zh)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110245902A1 (en) * 2010-03-30 2011-10-06 Katz Jay W Method for relieving motion sickness and related apparatus
CN110015235A (zh) * 2019-03-12 2019-07-16 浙江吉利汽车研究院有限公司 一种车载显示方法、装置及设备
CN111295307A (zh) * 2017-12-21 2020-06-16 宝马汽车股份有限公司 用于减轻晕车症状的系统和方法
CN113534466A (zh) * 2021-07-16 2021-10-22 Oppo广东移动通信有限公司 显示方法、装置、头戴式增强现实设备以及存储介质
CN113808058A (zh) * 2021-08-25 2021-12-17 惠州市德赛西威汽车电子股份有限公司 一种基于视觉模型的防晕车方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237240A (zh) * 2023-11-15 2023-12-15 湖南蚁为软件有限公司 基于数据特征的数据智能采集方法及系统
CN117237240B (zh) * 2023-11-15 2024-02-02 湖南蚁为软件有限公司 基于数据特征的数据智能采集方法及系统

Also Published As

Publication number Publication date
CN116804918A (zh) 2023-09-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23769785

Country of ref document: EP

Kind code of ref document: A1