Research Interests: Patrick Marais
Patrick's research interests are primarily in computer graphics, GPGPU algorithms and computer vision/image processing. Recent student projects include accelerating radio astronomy algorithms (for MeerKAT/SKA), laser-scan point processing, graph algorithms for fast path planning, efficient GPU simulation of sandy terrain, and procedural modelling for virtual worlds. A list of possible projects is provided below; please note that this list is not exhaustive. Suggestions for projects in related areas are welcome.
Computer Graphics and Computer Vision
Point-cloud processing for laser-scanned models of heritage sites
Three projects are currently listed under this theme - however, there are many other problems to solve, so please come and chat to me if you would like to explore this area further.
Automated Hole Filling – Due to constraints on the positioning of the laser scanner, concavities in buildings, and occluding objects such as trees, pedestrians and motor vehicles, point cloud data sets often contain gaps. An automated approach to filling these holes is to transplant complete surfaces from elsewhere in the model, using the edges around the hole as context. Current state-of-the-art techniques search for matches only within the point cloud itself; this could be improved by taking other data sources into account – such as digital photographs – which might even offer a partial view of the obstructed area.
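As a toy illustration of this "transplant using edge context" idea, the sketch below works on a 2.5-D height grid rather than a full 3-D point cloud, and assumes the hole lies away from the grid border. The patch size and error measure are illustrative choices, not part of any existing pipeline:

```python
import numpy as np

def fill_hole(height, hole_mask, patch=5):
    """Exemplar-based fill on a 2.5-D height grid (a toy stand-in for a
    point cloud).  For each hole pixel, scan the grid for the hole-free
    patch whose pixels best match the known context around the hole,
    then transplant that patch's centre value."""
    h = height.copy()
    r = patch // 2
    for y, x in zip(*np.where(hole_mask)):
        target = h[y-r:y+r+1, x-r:x+r+1]
        known = ~hole_mask[y-r:y+r+1, x-r:x+r+1]   # context pixels only
        best, best_err = None, np.inf
        for cy in range(r, h.shape[0] - r):
            for cx in range(r, h.shape[1] - r):
                if hole_mask[cy-r:cy+r+1, cx-r:cx+r+1].any():
                    continue                        # candidate must be complete
                cand = h[cy-r:cy+r+1, cx-r:cx+r+1]
                err = np.mean((cand[known] - target[known]) ** 2)
                if err < best_err:
                    best_err, best = err, cand
        if best is not None:
            h[y, x] = best[r, r]
    return h
```

A real system would match surface patches in 3-D, but the core loop – describe the hole boundary, search for the best-matching complete region, copy it in – is the same.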
Irregularity-Sensitive Decimation – One approach to reducing the computational cost of point cloud processing is to simply reduce the number of points. However, most decimation algorithms start from an assumption of planarity and will discard important surface irregularities. This cannot be solved by applying a simplistic noise-tolerance cut-off, since a distinction needs to be made between natural surface irregularities and error arising from the scanning process.
Recent developments in multi-resolution point-set representations can be leveraged to identify important point-based structures as a precursor to point-set decimation. The identification of “interesting structures” in the point set - using scale-space analysis or a learning technique - would naturally assist with feature extraction and data de-noising. Context aware decimation can then be formulated in a way which preserves these important point structures and removes unnecessary point samples in highly regular or planar regions.
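A minimal sketch of what such irregularity-sensitive decimation might look like, assuming a "surface variation" score in the style of local covariance analysis (the neighbourhood size, threshold and sampling rate below are all illustrative):

```python
import numpy as np

def decimate(points, k=12, flat_thresh=0.01, keep_flat=0.2, rng=None):
    """Curvature-aware decimation sketch: score each point by its local
    'surface variation' (smallest eigenvalue of the neighbourhood
    covariance over the eigenvalue sum).  Irregular points are always
    kept; planar points are randomly thinned."""
    rng = np.random.default_rng(rng)
    n = len(points)
    # brute-force k-nearest neighbours (fine for a small sketch)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbr = np.argsort(d2, axis=1)[:, :k + 1]        # includes the point itself
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        local = points[nbr[i]] - points[nbr[i]].mean(0)
        evals = np.linalg.eigvalsh(local.T @ local)  # ascending order
        variation = evals[0] / max(evals.sum(), 1e-12)
        if variation > flat_thresh:
            keep[i] = True                           # irregular: keep
        elif rng.random() < keep_flat:
            keep[i] = True                           # planar: keep a sample
    return points[keep]
```

The interesting research questions sit in the scoring step: replacing the single-scale covariance score with a scale-space or learned detector is exactly what would distinguish genuine surface detail from scanner noise.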
Semi-automated removal of point cloud artifacts - laser scanners always pick up unwanted objects such as people, trees and power lines. "Cleaning" these scans takes many hours of labour and slows down the construction of the final 3D model. This project looks to automate the removal of these artifacts from laser-scanned objects so that only minimal user intervention is required. One approach is a "magic brush" that selects similar structures when an example structure is selected. The magic is in finding the algorithm and parameters that can accomplish this!
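One plausible starting point for such a brush – purely a sketch, with an assumed covariance-eigenvalue shape signature and an arbitrary tolerance – is to describe each point's neighbourhood as linear, planar or volumetric, and select every point whose signature matches the user-picked seed:

```python
import numpy as np

def magic_brush(points, seed_idx, k=10, tol=0.1):
    """'Magic brush' sketch: describe each point by the normalised
    eigenvalues of its local covariance (a crude linear/planar/volumetric
    signature), then select all points whose signature is close to that
    of the user-selected seed point."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbr = np.argsort(d2, axis=1)[:, :k + 1]        # includes the point itself
    desc = np.empty((len(points), 3))
    for i in range(len(points)):
        local = points[nbr[i]] - points[nbr[i]].mean(0)
        evals = np.linalg.eigvalsh(local.T @ local)  # ascending order
        desc[i] = evals / max(evals.sum(), 1e-12)
    dist = np.linalg.norm(desc - desc[seed_idx], axis=1)
    return np.where(dist < tol)[0]
```

Clicking on a power-line point would then select the other wire-like (linear) structures while leaving wall-like (planar) regions untouched; richer descriptors and learned similarity measures are where the project would go beyond this.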
Visualisation of Scientific Data
Many scientific disciplines generate high-dimensional data sets which are hard to interpret. The purpose of this theme is to provide tools to assist scientists in analysing their data. This encompasses both visualisation techniques, aimed at drawing attention to features of interest, as well as low-level graphics/rendering optimizations intended to speed up the investigation of large data sets with many graphical primitives. We are particularly interested in visualisation techniques suited to the massive data sets produced by modern astronomical instruments and high-fidelity simulations of cosmic evolution.
GPGPU and Manycore Computing/Simulation
Following on from the decision to build the Square Kilometre Array (SKA) in South Africa, our main focus is on high-performance algorithms for radio astronomy.
Many numerical algorithms are embarrassingly parallel - or would appear so at first glance. GPUs and the new generation of many-integrated-core (MIC) devices, such as Intel's Xeon Phi, offer huge potential speed-ups for certain classes of algorithm. However, the peculiarities of these architectures mean that this mapping is seldom simple. Our particular interest is in the efficient mapping and scaling of physical simulation algorithms (such as particle interactions) onto GPUs and MIC devices. Another important component of these projects is dealing with Big Data: the SKA will produce exabytes of data at extremely high data rates, and analysing this in real time or near real time is a challenge which remains to be tackled. Any useful algorithm must therefore scale very well.
GPU/MIC-based computing for astronomical algorithms
Interpreting and processing astronomical data requires significant computational resources. We are working with the Department of Astronomy and the SKA science team to map current algorithms for radio interferometry onto the GPU. Current work includes 'source detection' algorithms - which examine a large data cube to isolate and categorize galactic sources - and GPU acceleration of radio interferometry processing. Possible projects - there are many! - include
- parallelising galaxy evolution simulations, e.g. dark matter simulations;
- the segmentation (extraction) of galactic structures from large galaxy point cloud data sets;
- parallelising radio telescope calibration computations for the SKA (which can build on existing work).
These algorithms offer huge potential for novelty and could, if they work well, influence the kind of science the SKA can do.
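The source-detection step mentioned above can be caricatured as threshold-plus-connected-components. The sketch below is only a minimal serial baseline (the noise estimator, threshold and 6-connectivity are assumed choices); production finders add smoothing, reliability filtering and source parameterisation, and parallelising these stages is where the GPU work begins:

```python
import numpy as np
from collections import deque

def detect_sources(cube, nsigma=5.0):
    """Minimal source finding on a 3-D data cube: threshold at nsigma
    times a robust (median absolute deviation) noise estimate, then
    flood-fill connected above-threshold voxels into candidate sources,
    returning each source's flux-weighted centroid."""
    sigma = 1.4826 * np.median(np.abs(cube - np.median(cube)))  # MAD noise
    mask = cube > nsigma * sigma
    seen = np.zeros_like(mask)
    sources = []
    for start in zip(*np.where(mask)):
        if seen[start]:
            continue
        voxels, queue = [], deque([start])
        seen[start] = True
        while queue:                    # flood-fill one connected source
            v = queue.popleft()
            voxels.append(v)
            for axis in range(3):
                for step in (-1, 1):
                    w = list(v); w[axis] += step; w = tuple(w)
                    if all(0 <= w[a] < cube.shape[a] for a in range(3)) \
                            and mask[w] and not seen[w]:
                        seen[w] = True
                        queue.append(w)
        flux = np.array([cube[v] for v in voxels])
        sources.append((np.array(voxels) * flux[:, None]).sum(0) / flux.sum())
    return sources
```

Both stages map well to the GPU - the threshold is trivially parallel, while connected-component labelling on large cubes is itself a well-studied parallelisation problem.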