Research Interests: Patrick Marais
Dr Marais' research interests are primarily in the areas of computer graphics, GPGPU algorithms and computer vision/image processing. Recent student projects include work on accelerating radio astronomy algorithms (for MeerKAT/SKA), laser-scan point processing, graph algorithms for fast path planning, efficient GPU simulation of sandy terrain, and procedural modelling for virtual worlds. A list of possible projects is provided below – please note that this list is not exhaustive. Suggestions for projects in related areas are welcome.
Computer Graphics and Computer Vision
Point-cloud processing for laser-scanned models of heritage sites
Two projects are currently listed under this theme - however, there are many other problems to solve, so please come and chat to me if you would like to explore this area further.
Automated Hole Filling – Due to constraints on the positioning of the laser scanner, concavities in buildings, and occluding objects, such as trees, pedestrians and motor vehicles, point cloud data sets often contain gaps. An automated approach to filling these holes is to transplant complete surfaces from elsewhere in the model, using the edges around the hole as context. Current state-of-the-art techniques only search for matches across the point cloud, but this could be improved by taking into account other data sources – such as digital photographs – which might even offer a partial view of the obstructed area.
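Before any hole can be filled, its boundary must be located. As a rough illustration of "the edges around the hole" idea, here is a hedged sketch (assuming numpy; the function name and the angular-gap heuristic are illustrative choices, not the project's prescribed method): a point is flagged as lying on a hole boundary when its nearest neighbours, projected onto the local tangent plane, leave a large angular gap on one side.

```python
import numpy as np

def boundary_points(points, k=8, gap_threshold=np.pi / 2):
    """Flag points that lie on a hole or mesh boundary.

    Heuristic sketch: project each point's k nearest neighbours onto its
    local tangent plane (estimated via PCA/SVD); if the largest angular
    gap between neighbour directions exceeds gap_threshold, the point is
    treated as a boundary point.  Brute-force neighbour search, so this
    is illustrative only -- real point clouds need a spatial index.
    """
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]] - p   # k nearest, excluding p
        # The two dominant principal directions span the tangent plane.
        _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
        u, v = vt[0], vt[1]
        angles = np.sort(np.arctan2(nbrs @ v, nbrs @ u))
        gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
        flags[i] = gaps.max() > gap_threshold
    return flags
```

On a flat scan with a missing patch, points adjacent to the gap see all their neighbours on one side and are flagged, while interior points with a full ring of neighbours are not.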
Irregularity-Sensitive Decimation – One approach to reducing the computation cost of point cloud processing is to simply reduce the number of points. However, most decimation algorithms start from an assumption of planarity and will discard important surface irregularities. This cannot be solved by applying a simplistic noise-tolerance cut-off, since a distinction needs to be made between natural surface irregularities and error arising from the scanning process.
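To make "surface irregularity" concrete, one common measure (an illustrative choice here, not necessarily what a project would adopt) is PCA-based surface variation: the smallest eigenvalue of the local covariance matrix divided by the eigenvalue sum, which is near zero on planar patches and grows at creases and edges. A minimal numpy sketch, with hypothetical function names and a naive brute-force neighbour search:

```python
import numpy as np

def surface_variation(points, k=10):
    """Per-point surface variation: smallest covariance eigenvalue over
    the eigenvalue sum of the k-neighbourhood.  Near 0 on planar
    patches, larger near creases, edges and other irregularities."""
    sigma = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k + 1]]        # point plus k neighbours
        lam = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        sigma[i] = lam[0] / lam.sum()
    return sigma

def decimate(points, k=10, keep_fraction=0.5):
    """Toy irregularity-sensitive decimation: keep all high-variation
    points, plus a random subset of the flat ones."""
    sigma = surface_variation(points, k)
    thresh = np.quantile(sigma, 0.9)                # top 10% kept as features
    rng = np.random.default_rng(0)
    keep = (sigma >= thresh) | (rng.random(len(points)) < keep_fraction)
    return points[keep]
```

A real system would replace the fixed quantile with a noise model, precisely the natural-irregularity-versus-scanner-error distinction described above.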
Recent developments in multi-resolution point-set representations can be leveraged to identify important point-based structures as a precursor to point-set decimation. The identification of “interesting structures” in the point set - using scale-space analysis or a learning technique - would naturally assist with feature extraction and data de-noising. Context-aware decimation can then be formulated in a way which preserves these important point structures and removes unnecessary point samples in highly regular or planar regions.
Extending Voxel-space Shape Grammars for 3D procedural modelling
Recent work by one of our students involved the creation of a 3D modelling system which uses a volumetric voxel representation to create 3D models through a 'shape grammar'.
A number of extensions and refinements are possible, both to improve efficiency and to increase the utility of the modelling system. The grammar provides rules for encapsulating symmetry and combining building block elements through volumetric Constructive Solid geometry (CSG). It also provides rules for ‘detailing’ voxels which can be used to add surface detail, carve out interior regions etc.
Currently the system is unoptimized and only supports the creation of fairly small, low-resolution models. The fundamental goal of this project is to implement additional surface detailing functionality and to explore the use of a hybrid surface detailing method that combines meshes and textures with the underlying 3D voxel geometry.
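To make the volumetric CSG rules concrete, here is a hedged sketch (assuming a dense boolean numpy occupancy grid; this matches the voxel representation above only in spirit, and all names are illustrative rather than taken from the student system):

```python
import numpy as np

# A "solid" is a boolean occupancy grid; CSG becomes boolean algebra.
def sphere(shape, centre, radius):
    """Voxelised sphere: occupancy grid of the given shape."""
    idx = np.indices(shape)
    d2 = sum((idx[i] - centre[i]) ** 2 for i in range(3))
    return d2 <= radius ** 2

def box(shape, lo, hi):
    """Voxelised axis-aligned box with corners lo (inclusive), hi (exclusive)."""
    idx = np.indices(shape)
    return np.logical_and.reduce(
        [(idx[i] >= lo[i]) & (idx[i] < hi[i]) for i in range(3)])

union = np.logical_or
intersection = np.logical_and

def difference(a, b):
    """CSG difference: voxels in a but not in b."""
    return a & ~b

# Example detailing-style rule: carve a spherical hollow out of a cube.
shape = (32, 32, 32)
solid = difference(box(shape, (4, 4, 4), (28, 28, 28)),
                   sphere(shape, (16, 16, 16), 8))
```

Dense grids like this are exactly where the unoptimised system struggles; sparse structures (octrees, run-length encoding) are the usual remedy at higher resolutions.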
Adaptive Field D* path planning
Field D* is a pathfinding algorithm which is capable of finding paths through weighted triangle and tetrahedral meshes. This is in contrast to algorithms such as Dijkstra's and A*, which find shortest paths in a graph of weighted edges. To accomplish this, Field D* replaces the typical graph edge cost function with a cost function which operates on triangles or tetrahedra. This cost function is necessarily more complex and requires minimisation, which makes Field D* more computationally expensive than, for example, A*. It also has replanning capability - it is able to replan paths when path costs change while travelling along the path. We propose an MSc project focusing on minimising the computational expense involved in computing a shortest path with Field D*.
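For reference, the edge-based search that Field D* generalises can be written compactly. A minimal A* sketch (the graph encoding and names are illustrative, not drawn from any Field D* implementation); Field D* replaces the per-edge weight `w` below with a minimisation over triangle or tetrahedron faces:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a weighted graph: graph[u] -> [(v, edge_cost), ...].
    h is an admissible heuristic; h = lambda n: 0 gives Dijkstra."""
    open_set = [(h(start), 0.0, start)]   # (f = g + h, g, node)
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, gu, u = heapq.heappop(open_set)
        if u in closed:
            continue                      # stale queue entry
        closed.add(u)
        if u == goal:                     # reconstruct path back to start
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1], gu
        for v, w in graph.get(u, []):
            if gu + w < g.get(v, float("inf")):
                g[v] = gu + w
                parent[v] = u
                heapq.heappush(open_set, (g[v] + h(v), g[v], v))
    return None, float("inf")
```

The cost of each relaxation here is a single addition; in Field D* it is a small continuous optimisation per cell face, which is where the extra computational expense arises.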
Adaptive meshing is a technique from finite element methods in which a quick, approximate solution is found on a very coarse mesh (containing few triangles or tetrahedra). Sensitive areas of the mesh are then refined by introducing more triangles and tetrahedra, and a more accurate solution is obtained. Utilising this approach, a quick A* or Field D* search could be performed on a very coarse mesh to obtain an approximate shortest path. Triangles/tetrahedra adjacent to this path can then be subdivided, and Field D*'s replanning capability used to improve the resulting path to within a certain error tolerance.
This approach is related to hierarchical path planning, where an A* search is first performed on a simplified graph. A similar scheme, based on progressive mesh refinement, can be used, whereby adjacent triangles of similar cost are aggregated to produce the coarse mesh required above.
Students wishing to undertake this project will have access to a PhD student who has implemented and published on the Field D* algorithm itself and can provide direction.
Visualisation of scientific data
Many scientific disciplines generate high-dimensional data sets which are hard to interpret. The purpose of this theme is to provide tools to assist scientists in analysing their data. This encompasses both visualisation techniques, aimed at drawing attention to features of interest, and low-level graphics/rendering optimizations intended to speed up the investigation of large data sets with many graphical primitives. We are particularly interested in visualisation techniques suited to astronomy data, since this ties in to current projects in that area.
Many numerical algorithms are - or initially appear to be - embarrassingly parallel. GPUs offer huge potential speedups for certain classes of algorithm, since they have hundreds of (small) cores. However, the peculiarities of GPU architecture mean that this mapping is seldom simple. Our particular interest is in the efficient mapping and scaling of physical simulation algorithms (such as particle interactions) onto GPUs. Communication overheads and synchronization amongst GPU cores need to be carefully examined in these cases.
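The particle-interaction case can be illustrated in a data-parallel style even on the CPU. The numpy sketch below (a toy inverse-square interaction; the softening constant and function name are illustrative) expresses all N×N interactions as independent array operations, which is essentially the form a GPU kernel wants: one thread per interaction, with synchronisation needed only at the final reduction over j.

```python
import numpy as np

def pairwise_forces(pos, eps=1e-9):
    """Net force on each particle from all-pairs inverse-square attraction,
    written as dense array operations.

    Every entry of the (N, N, 3) interaction tensor is independent, so
    the computation maps directly onto GPU cores; the only cross-core
    communication is the final sum over j (a parallel reduction)."""
    diff = pos[None, :, :] - pos[:, None, :]          # (N, N, 3) displacements
    d2 = (diff ** 2).sum(-1) + eps                    # softened squared distances
    np.fill_diagonal(d2, np.inf)                      # no self-interaction
    return (diff / d2[..., None] ** 1.5).sum(axis=1)  # (N, 3) net forces
```

The O(N²) memory of the dense form is exactly the kind of cost that careful GPU mappings (tiling interactions through shared memory, tree codes) are designed to avoid.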
Currently, following on from the decision to build the Square Kilometre Array (SKA) in South Africa, our main focus is on high-performance algorithms for radio astronomy:
GPU-based computing for astronomical algorithms
Interpreting and processing astronomical data requires significant computational resources. We are working with the Department of Astronomy to map current algorithms for radio interferometry onto the GPU. Current work includes 'source detection' algorithms - which examine a large data cube to isolate and categorise galactic sources - and GPU acceleration of radio interferometry processing (direct Fourier methods). Possible future projects involve parallelising galaxy evolution simulations and the segmentation (extraction) of galactic structures from large galaxy point cloud data sets.
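As a toy illustration of what a source-detection pass over a data cube involves (real pipelines use far more robust statistics and run on the GPU; the threshold-plus-connected-components approach and the function name here are simplifying assumptions):

```python
import numpy as np
from collections import deque

def detect_sources(cube, threshold):
    """Toy source finder: threshold the data cube, then group
    above-threshold voxels into 6-connected components ('sources').
    Returns a label volume and the number of sources found."""
    mask = cube > threshold
    labels = np.zeros(cube.shape, dtype=int)
    n_sources = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # voxel already assigned to a source
        n_sources += 1
        labels[seed] = n_sources
        queue = deque([seed])
        while queue:                      # flood fill one connected component
            v = queue.popleft()
            for axis in range(3):
                for step in (-1, 1):
                    n = list(v); n[axis] += step; n = tuple(n)
                    if (all(0 <= n[i] < cube.shape[i] for i in range(3))
                            and mask[n] and not labels[n]):
                        labels[n] = n_sources
                        queue.append(n)
    return labels, n_sources
```

The thresholding and per-voxel neighbour checks are embarrassingly parallel; the component labelling is the part that requires the careful GPU treatment (e.g. iterative label propagation) discussed above.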