I received my Ph.D. in electrical engineering from Michigan Technological University (MTU) in 2021. My advisor was Dr. Jeremy P. Bos, and my dissertation was titled "Light Field Compression and Manipulation via Residual Convolutional Neural Network." I also earned my M.S. in computer science from MTU in 2019.
I graduated from the University of Mazandaran in 2012 with a B.Sc. in solid-state physics. I worked as a software and algorithm developer for two years before starting my Ph.D. at MTU.
Postprocessing of light fields enables us to extract more information from a scene than traditional cameras allow. Plenoptic cameras and camera arrays are two common methods for light-field capture, and it has long been recognized that the two devices are in some ways equivalent. In practice, both techniques face important constraints: camera arrays are unable to provide high angular sampling, and plenoptic cameras have limited spatial sampling. In simulation, we can easily explore both constraints by rendering two-dimensional viewpoint images and combining them into a four-dimensional light field. We present a transformation for converting between equivalent plenoptic configurations and camera arrays when they capture pristine light fields produced in simulation. We use this approach to simulate light fields of simple scenes and validate our transformation by comparing the focus distance of a standard plenoptic camera with that of the equivalent camera array's light field. We also show how some simple practical effects can be added to the pristine, synthetic light field via postprocessing, and how those effects change the refocusing distance.
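The core pipeline the abstract describes can be sketched in a few lines: stack rendered two-dimensional viewpoint images into a four-dimensional light field L[u, v, y, x], then refocus by shift-and-sum over the angular dimensions. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, the integer-pixel shifts, and the `slope` parameterization of the refocus plane are my assumptions.

```python
import numpy as np

def assemble_light_field(views, grid_shape):
    """Stack 2D viewpoint images (a list of HxW arrays, in row-major
    camera-grid order) into a 4D light field indexed as L[u, v, y, x]."""
    U, V = grid_shape
    H, W = views[0].shape
    return np.asarray(views).reshape(U, V, H, W)

def refocus(lf, slope):
    """Shift-and-sum refocusing: translate each view in proportion to its
    angular offset from the central view, then average all views.
    `slope` selects the synthetic focal plane (0 keeps the native focus)."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift for simplicity; a real implementation
            # would interpolate for sub-pixel accuracy.
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `slope` and measuring where a target is sharpest is one simple way to compare the focus distance of two capture configurations, as the abstract's validation suggests.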
Post-processing of light fields enables us to extract more information from a scene compared to traditional cameras. Plenoptic cameras and camera arrays are two common methods for light-field capture; in fact, it has long been recognized that the two devices are in some ways equivalent. Practically, though, light-field capture via camera arrays results in poor angular sampling, and the plenoptic camera often suffers from relatively poor spatial sampling. In simulation, we can easily explore both constraints by simulating two-dimensional viewpoint images and combining them into a four-dimensional light field. In this work, we present a formalism for converting between equivalent plenoptic configurations and camera arrays. We use this approach to simulate a simple scene and explore the trade-offs in angular and spatial sampling in light-field capture.
In the past decade, the graphics processing unit (GPU) has established itself as a powerful accelerator in computing. GPU designers added a cache hierarchy to improve overall general-purpose GPU (GPGPU) performance. However, the massively multithreaded environment reduces the L1D cache share available to each thread. Considering that more than a thousand threads can be scheduled simultaneously on a streaming multiprocessor containing only 32 cores and up to 48 KB of L1D cache, each thread receives a very small share of cache capacity. Thrashing is therefore a common problem in modern GPUs. A mindful solution to this type of thrashing is to reduce the number of active warps that can be scheduled when locality is found, or to bypass warps without locality around the L1D cache. Such methods increase the cache working set for the warps with higher locality, which can improve performance and further reduce power consumption. In this survey, we briefly introduce cache-management techniques that have increased GPGPU cache efficiency in recent years. The policies we cover consist of warp throttling, bypassing, and combined bypass-and-insertion. Some bypass policies are ported directly from CPUs to GPUs, while others are tailored to the GPU architecture.
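The bypass idea the abstract summarizes can be illustrated with a toy model: track each warp's observed hit rate, and once a warp has shown poor locality, let its misses bypass the L1D instead of inserting and evicting lines that warps with reuse still need. This is a hypothetical, simplified sketch for intuition only; the class name, the fully-associative LRU model, and the warm-up and threshold values are my own choices, not any specific policy from the surveyed papers.

```python
from collections import OrderedDict

class BypassingL1Cache:
    """Toy fully-associative LRU L1D model in which warps with low
    observed locality bypass the cache rather than thrash it."""

    def __init__(self, num_lines, threshold=0.2, warmup=8):
        self.lines = OrderedDict()   # address -> None, ordered LRU -> MRU
        self.num_lines = num_lines
        self.threshold = threshold   # minimum hit rate to keep inserting
        self.warmup = warmup         # accesses before bypassing can start
        self.stats = {}              # warp_id -> [hits, accesses]

    def access(self, warp_id, addr):
        hits, total = self.stats.get(warp_id, [0, 0])
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)   # refresh LRU position
            hits += 1
        else:
            # Bypass decision: after warm-up, warps with a poor hit rate
            # do not insert their lines into the L1D on a miss.
            locality = hits / total if total else 1.0
            if total < self.warmup or locality >= self.threshold:
                if len(self.lines) >= self.num_lines:
                    self.lines.popitem(last=False)   # evict LRU line
                self.lines[addr] = None
        self.stats[warp_id] = [hits, total + 1]
        return hit
```

In this model, a warp that streams through memory with no reuse stops polluting the cache after a short warm-up, leaving the working set of high-locality warps intact, which is the effect the surveyed throttling and bypass policies aim for.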
1400 Townsend Drive,
Houghton, MI 49931
surname .at. mtu .com
+1 906 Two75 Nine0Zero8