
Fisheye lens

By: SKY ENGINE AI

Introduction

The fisheye lens uses a two-step projection model, similar to the pinhole lens. We define the following steps:

  1. Unit sphere projection - rays from the lens focus are shot towards a uniformly sampled sphere, and the whole scene is projected onto that sphere.
  2. Sphere-to-image-plane mapping - for the fisheye lens we implement an equidistant projection function, r = f * theta, where r is the distance of the projected point from the center of projection (the nodal point's projection onto the image plane), f is the camera's focal length, and theta is the angle between the ray and the camera's optical axis. A minimal sketch of this mapping follows the list.
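
To make the two steps concrete, here is a minimal NumPy sketch, independent of the renderer's API and assuming the camera looks down the +z axis, that carries a camera-space ray through both steps:

import numpy as np

def equidistant_project(ray, f, cx, cy):
    # Step 1: normalize the ray so it lands on the unit sphere.
    x, y, z = np.asarray(ray, dtype=float) / np.linalg.norm(ray)
    # Step 2: equidistant mapping r = f * theta; the azimuth phi is preserved.
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle to the optical axis (+z assumed)
    phi = np.arctan2(y, x)
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Example: a ray 30 degrees off-axis with f = 250 px lands about 131 px from the image center.
print(equidistant_project([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))], f=250.0, cx=400.0, cy=300.0))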

Scene configuration

First, we define the logger, root paths, renderer context, and example assistant:

from skyrenderer.core.logger_config import configure_logger
from skyrenderer.scene.renderer_context import RendererContext
from skyrenderer.example_assistant.display_config import DisplayConfig
from skyrenderer.example_assistant.visualization_settings import VisualizationDestination, VisualizedOutput
from skyrenderer.example_assistant.example_assistant import ExampleAssistant
from skyrenderer.utils.tutorial_config import get_tutorial_root_paths
logger = configure_logger()
root_paths_config = get_tutorial_root_paths()
renderer_context = RendererContext(root_paths_config)

display_config = DisplayConfig(
    visualization_destination=VisualizationDestination.SKY_ENGINE_VIEWER_ONLY,
    visualized_outputs={VisualizedOutput.RENDER, VisualizedOutput.SEMANTIC},
    output_files_path="visualization_files",
    cv_waitkey=0,
)
example_assistant = ExampleAssistant(context=renderer_context, display_config=display_config)
2024-11-11 11:00:36,253 | skyrenderer.scene.renderer_context | INFO: Root paths:
- root path: /dli/skyenvironment/skyrenderer/skyrenderer
- assets path: /dli/mount/assets/ren_tutorials
- config path: /dli/skyenvironment/skyrenderer/skyrenderer/config
- gpu sources path: /dli/skyenvironment/skyrenderer/skyrenderer/optix_sources/sources
- cache path: /dli/mount/cache
- ptx cache path: compiled_ptx/ptx
- ocio path: ocio_configs/aces_1.2/config.ocio
2024-11-11 11:00:36,253 | skyrenderer.core.asset_manager.asset_manager | INFO: Syncing assets...

Then we set up a scene with a single plane:

import numpy as np

from skyrenderer.scene.scene_layout.layout_elements_definitions import GeometryDefinition, LocusDefinition
from skyrenderer.basic_types.locus.transform import Transform
from skyrenderer.basic_types.locus.rotation import Rotation
from skyrenderer.basic_types.procedure import PlaneIntersector

renderer_context.add_node(
    "plane_GEO",
    locus_def=LocusDefinition(
        transform=Transform(rotation=Rotation.from_matrix([[1, 0, 0], [0, 0, -1], [0, 1, 0]]))
    ),
)
renderer_context.set_geometry_definition(
    "plane_GEO",
    GeometryDefinition(
        intersector=PlaneIntersector(renderer_context),
        buffer_provider=None,
        parameter_set=PlaneIntersector.create_parameter_provider(renderer_context, dims=(15, 15)),
    ),
)

Lens deformations as well as distortion will be more visible if the material assigned to the plane shows a UV layout test grid:

from skyrenderer.scene.scene_layout.layout_elements_definitions import MaterialDefinition
from skyrenderer.basic_types.procedure import PBRShader
from skyrenderer.basic_types.provider import FileTextureProvider

test_material_definition = MaterialDefinition(
    parameter_set=PBRShader.create_parameter_provider(renderer_context),
    texture_provider=FileTextureProvider(renderer_context, "test_grid_col"),
)
renderer_context.set_material_definition("plane_GEO", test_material_definition)

Then we define a few basic lights that ensure good plane visibility:

from skyrenderer.basic_types.light.sphere_light import SphereLight

renderer_context.add_node("lights", locus_def=LocusDefinition(transform=Transform(translation_vector=[0, 0, 0])))
renderer_context.remove_light("light_LIGHT_NUL")

light_provider = SphereLight.create_parameter_provider(renderer_context, color=(0.2, 0.2, 0.2), illuminance=25)

renderer_context.add_node(
    "light_00", "lights", locus_def=LocusDefinition(transform=Transform(translation_vector=[1, 3, -3]))
)
renderer_context.set_light(SphereLight(renderer_context, "light_00", light_provider))

renderer_context.add_node(
    "light_01", "lights", locus_def=LocusDefinition(transform=Transform(translation_vector=[-1, 3, -3]))
)
renderer_context.set_light(SphereLight(renderer_context, "light_01", light_provider))

renderer_context.add_node(
    "light_02", "lights", locus_def=LocusDefinition(transform=Transform(translation_vector=[0, 3, 3]))
)
renderer_context.set_light(SphereLight(renderer_context, "light_02", light_provider))

Fisheye lens configuration

First, we have to define the image resolution, the camera focal length, and the image-plane coordinates of the center of projection. As with the pinhole lens, sensor dimension information is taken into account, so per-axis focal lengths need to be defined.
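
The parameters used below (WIDTH, HEIGHT, camera_fx, camera_fy, camera_cx, camera_cy, dist_k1..dist_k4) are not defined elsewhere in this excerpt, so here is a minimal, illustrative setup; the values are placeholders, not calibrated sensor parameters:

# Illustrative placeholder values -- substitute the parameters of the sensor you model.
WIDTH, HEIGHT = 800, 600                            # image resolution in pixels
camera_fx = camera_fy = 250.0                       # per-axis focal lengths in pixels
camera_cx, camera_cy = 0.5 * WIDTH, 0.5 * HEIGHT    # center of projection (cx_relative = cy_relative = 0.5)
dist_k1, dist_k2, dist_k3, dist_k4 = 0.1, -0.05, 0.01, 0.0  # fisheye distortion coefficients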

# Import locations assumed from the module names in the log output below; adjust them
# if your skyrenderer version exposes these classes elsewhere.
from skyrenderer.render_chain.camera_steps.Lens.fisheye_lens import FisheyeLens
from skyrenderer.render_chain import RenderChain, VisibleLightRenderStep

renderer_context.add_node(
    "camera_CAM_NUL", locus_def=LocusDefinition(transform=Transform(translation_vector=[0, 0, 3]))
)
fisheye_params = VisibleLightRenderStep.create_parameter_provider(renderer_context, antialiasing_level=5)
lens = FisheyeLens(
    renderer_context,
    FisheyeLens.create_parameter_provider(
        renderer_context,
        fx=camera_fx,
        fy=camera_fy,
        cx_relative=0.5,
        cy_relative=0.5,
        dist_k1=dist_k1,
        dist_k2=dist_k2,
        dist_k3=dist_k3,
        dist_k4=dist_k4,
    ),
)
rs = VisibleLightRenderStep(
    renderer_context,
    lens=lens,
    origin_name="camera_CAM_NUL",
    parameter_provider=fisheye_params,
    target_name="top_node",
)
renderer_context.define_render_chain(RenderChain(render_steps=[rs], width=WIDTH, height=HEIGHT))

Semantic info and visualization

Finally, we add semantic info (to visualize the semantic map deformation) and visualize the render:

renderer_context.set_semantic_class("plane_GEO", 1)
renderer_context.setup()
logger.info(f"Scene\n{str(renderer_context)}")

with example_assistant.get_visualizer() as visualizer:
    res = renderer_context.render_to_numpy(0)
    visualizer(res)

renderer_context.tear_down()
2024-11-11 11:00:42,484 | skyrenderer.render_chain.camera_steps.Lens.fisheye_lens | INFO: Distortion back mapping, it may take a while...
2024-11-11 11:00:43,104 | skyrenderer.render_chain.camera_steps.Lens.fisheye_lens | INFO: Distortion back mapping calculated!
2024-11-11 11:00:45,608 | skyrenderer.utils.time_measurement | INFO: Setup time: 3.20 seconds
2024-11-11 11:00:45,610 | skyrenderer | INFO: Scene
scene_tree: top_node (NO_CLASS; count: 1)
|-- plane_GEO (semantic class: 1, ; count: 1)
|-- lights (NO_CLASS; count: 1)
|   |-- light_00 (NO_CLASS; count: 1)
|   |-- light_01 (NO_CLASS; count: 1)
|   +-- light_02 (NO_CLASS; count: 1)
+-- camera_CAM_NUL (NO_CLASS; count: 1)
2024-11-11 11:00:46,225 | skyrenderer.utils.time_measurement | INFO: Context update time: 615 ms
2024-11-11 11:00:48,750 | skyrenderer.utils.time_measurement | INFO: Key points calculation time: 0 ms
2024-11-11 11:00:48,752 | skyrenderer.utils.time_measurement | INFO: Render time: 2.53 seconds

OpenCV undistortion

As mentioned above, the distortion model is fully compatible with OpenCV, so a typical undistortion step can be executed.
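
Compatibility here means the renderer applies the same forward model that OpenCV's fisheye module assumes: the incidence angle theta is distorted as theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8), and the point lands at radius r = f * theta_d on the image plane. A minimal sketch of that polynomial, independent of the renderer:

def distort_theta(theta: float, k1: float, k2: float, k3: float, k4: float) -> float:
    # OpenCV fisheye forward model: theta_d = theta * (1 + k1*t^2 + k2*t^4 + k3*t^6 + k4*t^8)
    t2 = theta * theta
    return theta * (1 + k1 * t2 + k2 * t2**2 + k3 * t2**3 + k4 * t2**4)

With the illustrative coefficients above (k1=0.1, k2=-0.05, k3=0.01, k4=0.0), a ray 30 degrees off-axis gives theta_d ≈ 0.536 rad, i.e. r ≈ 134 px for f = 250 px, versus about 131 px without distortion.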

import cv2 as cv

distorted_image = res["RENDER"]
distorted_semantic_mask = res["SEMANTIC"]

camera_matrix = np.array([[camera_fx, 0, camera_cx], [0, camera_fy, camera_cy], [0, 0, 1]], np.float64)
dist_coeffs = np.array([dist_k1, dist_k2, dist_k3, dist_k4], np.float64)

map_1, map_2 = cv.fisheye.initUndistortRectifyMap(
    camera_matrix, dist_coeffs, None, camera_matrix, (WIDTH, HEIGHT), cv.CV_32FC1
)
img_undistorted = cv.remap(distorted_image, map_1, map_2, cv.INTER_LINEAR)
# Nearest-neighbour interpolation keeps semantic class IDs intact (linear would blend them).
img_semantic_undistorted = cv.remap(distorted_semantic_mask, map_1, map_2, cv.INTER_NEAREST)

res["RENDER"] = img_undistorted
res["SEMANTIC"] = img_semantic_undistorted

with example_assistant.get_visualizer() as visualizer:
    visualizer(res)

The end.